2018-04-25T12:28:31.021-0400 I CONTROL [initandlisten] MongoDB starting : pid=90134 port=9007 dbpath=/tmp/mms-automation/test/output/data/process9007 64-bit host=louisamac
2018-04-25T12:28:31.022-0400 I CONTROL [initandlisten] DEBUG build (which is slower)
2018-04-25T12:28:31.022-0400 I CONTROL [initandlisten] db version v3.7.5-90-g5122108-patch-5addeaf22fbabe5f7d685dea
2018-04-25T12:28:31.023-0400 I CONTROL [initandlisten] git version: 5122108624ee89a2488790e96bade2773009f3a5
2018-04-25T12:28:31.023-0400 I CONTROL [initandlisten] allocator: system
2018-04-25T12:28:31.024-0400 I CONTROL [initandlisten] modules: none
2018-04-25T12:28:31.024-0400 I CONTROL [initandlisten] build environment:
2018-04-25T12:28:31.025-0400 I CONTROL [initandlisten]     distarch: x86_64
2018-04-25T12:28:31.025-0400 I CONTROL [initandlisten]     target_arch: x86_64
2018-04-25T12:28:31.026-0400 I CONTROL [initandlisten] options: { config: "cfg1.json", net: { bindIp: "0.0.0.0", port: 9007 }, processManagement: { fork: true }, replication: { oplogSizeMB: 64, replSetName: "csrs" }, sharding: { clusterRole: "configsvr" }, storage: { dbPath: "/tmp/mms-automation/test/output/data/process9007", engine: "wiredTiger", mmapv1: { preallocDataFiles: false, smallFiles: true } }, systemLog: { destination: "file", path: "/tmp/mms-automation/test/logs/run9007" } }
2018-04-25T12:28:31.027-0400 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=7680M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
2018-04-25T12:28:31.593-0400 I STORAGE [initandlisten] WiredTiger message [1524673711:593201][90134:0x7fffa6128340], txn-recover: Set global recovery timestamp: 0
2018-04-25T12:28:31.644-0400 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
2018-04-25T12:28:31.737-0400 W STORAGE [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
2018-04-25T12:28:31.738-0400 I CONTROL [initandlisten]
2018-04-25T12:28:31.739-0400 I CONTROL [initandlisten] ** NOTE: This is a development version (3.7.5-90-g5122108-patch-5addeaf22fbabe5f7d685dea) of MongoDB.
2018-04-25T12:28:31.739-0400 I CONTROL [initandlisten] **       Not recommended for production.
2018-04-25T12:28:31.740-0400 I CONTROL [initandlisten]
2018-04-25T12:28:31.741-0400 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
2018-04-25T12:28:31.742-0400 I CONTROL [initandlisten] **          Read and write access to data and configuration is unrestricted.
2018-04-25T12:28:31.743-0400 I CONTROL [initandlisten]
2018-04-25T12:28:31.752-0400 I STORAGE [initandlisten] createCollection: local.startup_log with generated UUID: 00d7673c-4fa4-4f0c-b538-f1b0d1a17048
2018-04-25T12:28:31.834-0400 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/tmp/mms-automation/test/output/data/process9007/diagnostic.data'
2018-04-25T12:28:31.842-0400 I SHARDING [thread1] creating distributed lock ping thread for process ConfigServer (sleeping for 30000ms)
2018-04-25T12:28:31.842-0400 I STORAGE [initandlisten] createCollection: local.replset.oplogTruncateAfterPoint with generated UUID: 30d88e88-2143-40a4-b0d6-d5662fd2e373
2018-04-25T12:28:31.842-0400 I SHARDING [shard registry reload] Periodic reload of shard registry failed :: caused by :: ReadConcernMajorityNotAvailableYet: could not get updated shard list from config server :: caused by :: Read concern majority reads are currently not possible.; will retry after 30s
2018-04-25T12:28:31.925-0400 I STORAGE [initandlisten] createCollection: local.replset.minvalid with generated UUID: 2d912ae5-63dd-4cf4-ac0d-0ed22ad5b61b
2018-04-25T12:28:32.003-0400 I REPL [initandlisten] Did not find local voted for document at startup.
2018-04-25T12:28:32.006-0400 I REPL [initandlisten] Did not find local Rollback ID document at startup. Creating one.
2018-04-25T12:28:32.006-0400 W REPL [ftdc] Rollback ID is not initialized yet.
2018-04-25T12:28:32.007-0400 I STORAGE [initandlisten] createCollection: local.system.rollback.id with generated UUID: dae7a0a2-4792-4953-a399-98ccfb4c3f1c
2018-04-25T12:28:32.081-0400 I REPL [initandlisten] Initialized the rollback ID to 1
2018-04-25T12:28:32.082-0400 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument: Did not find replica set configuration document in local.system.replset
2018-04-25T12:28:32.083-0400 I NETWORK [initandlisten] waiting for connections on port 9007
2018-04-25T12:29:01.849-0400 I SHARDING [shard registry reload] Periodic reload of shard registry failed :: caused by :: ReadConcernMajorityNotAvailableYet: could not get updated shard list from config server :: caused by :: Read concern majority reads are currently not possible.; will retry after 30s
2018-04-25T12:29:31.852-0400 I SHARDING [shard registry reload] Periodic reload of shard registry failed :: caused by :: ReadConcernMajorityNotAvailableYet: could not get updated shard list from config server :: caused by :: Read concern majority reads are currently not possible.; will retry after 30s
2018-04-25T12:30:01.861-0400 I SHARDING [shard registry reload] Periodic reload of shard registry failed :: caused by :: ReadConcernMajorityNotAvailableYet: could not get updated shard list from config server :: caused by :: Read concern majority reads are currently not possible.; will retry after 30s
2018-04-25T12:30:31.863-0400 I SHARDING [shard registry reload] Periodic reload of shard registry failed :: caused by :: ReadConcernMajorityNotAvailableYet: could not get updated shard list from config server :: caused by :: Read concern majority reads are currently not possible.; will retry after 30s
2018-04-25T12:31:01.871-0400 I SHARDING [shard registry reload] Periodic reload of shard registry failed :: caused by :: ReadConcernMajorityNotAvailableYet: could not get updated shard list from config server :: caused by :: Read concern majority reads are currently not possible.; will retry after 30s
2018-04-25T12:31:31.875-0400 I SHARDING [shard registry reload] Periodic reload of shard registry failed :: caused by :: ReadConcernMajorityNotAvailableYet: could not get updated shard list from config server :: caused by :: Read concern majority reads are currently not possible.; will retry after 30s
2018-04-25T12:32:01.884-0400 I SHARDING [shard registry reload] Periodic reload of shard registry failed :: caused by :: ReadConcernMajorityNotAvailableYet: could not get updated shard list from config server :: caused by :: Read concern majority reads are currently not possible.; will retry after 30s
2018-04-25T12:32:27.649-0400 I NETWORK [listener] connection accepted from 127.0.0.1:59094 #1 (1 connection now open)
2018-04-25T12:32:27.650-0400 I NETWORK [conn1] received client metadata from 127.0.0.1:59094 conn1: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.6.2" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "17.3.0" } }
2018-04-25T12:32:31.891-0400 I SHARDING [shard registry reload] Periodic reload of shard registry failed :: caused by :: ReadConcernMajorityNotAvailableYet: could not get updated shard list from config server :: caused by :: Read concern majority reads are currently not possible.; will retry after 30s
2018-04-25T12:33:01.892-0400 I SHARDING [shard registry reload] Periodic reload of shard registry failed :: caused by :: ReadConcernMajorityNotAvailableYet: could not get updated shard list from config server :: caused by :: Read concern majority reads are currently not possible.; will retry after 30s
2018-04-25T12:33:31.893-0400 I SHARDING [shard registry reload] Periodic reload of shard registry failed :: caused by :: ReadConcernMajorityNotAvailableYet: could not get updated shard list from config server :: caused by :: Read concern majority reads are currently not possible.; will retry after 30s
2018-04-25T12:33:32.088-0400 I SHARDING [thread3] Refreshing cached database entry for config; current cached database info is {}
2018-04-25T12:33:32.090-0400 I SHARDING [ConfigServerCatalogCacheLoader-0] Refresh for database config took 0 ms and found { _id: "config", primary: "config", partitioned: true }
2018-04-25T12:33:32.091-0400 I CONTROL [thread3] Failed to create config.system.sessions: Cannot create config.system.sessions until there are shards, will try again at the next refresh interval
2018-04-25T12:33:32.092-0400 I CONTROL [thread3] Sessions collection is not set up; waiting until next sessions refresh interval: Cannot create config.system.sessions until there are shards
2018-04-25T12:33:34.083-0400 I REPL [conn1] replSetInitiate admin command received from client
2018-04-25T12:33:34.085-0400 W NETWORK [conn1] getaddrinfo("louicamac") failed: nodename nor servname provided, or not known
2018-04-25T12:33:34.090-0400 E REPL [conn1] replSet initiate got NodeNotFound: No host described in new configuration 1 for replica set csrs maps to this node while validating { configsvr: true, protocolVersion: 1.0, _id: "csrs", members: [ { _id: 0.0, host: "louicamac:9007" }, { _id: 1.0, host: "louisamac:9008" }, { _id: 2.0, host: "louisamac:9009" } ], version: 1 }
2018-04-25T12:33:41.385-0400 I REPL [conn1] replSetInitiate admin command received from client
2018-04-25T12:33:41.390-0400 I REPL [conn1] replSetInitiate config object with 3 members parses ok
2018-04-25T12:33:41.391-0400 I ASIO [conn1] Connecting to louisamac:9008
2018-04-25T12:33:41.391-0400 I ASIO [conn1] Connecting to louisamac:9009
2018-04-25T12:33:41.395-0400 I REPL [conn1] ******
2018-04-25T12:33:41.395-0400 I NETWORK [listener] connection accepted from 127.0.0.1:59114 #8 (2 connections now open)
2018-04-25T12:33:41.396-0400 I REPL [conn1] creating replication oplog of size: 64MB...
2018-04-25T12:33:41.397-0400 I NETWORK [listener] connection accepted from 127.0.0.1:59115 #9 (3 connections now open)
2018-04-25T12:33:41.397-0400 I NETWORK [conn8] received client metadata from 127.0.0.1:59114 conn8: { driver: { name: "NetworkInterfaceTL", version: "3.7.5-90-g5122108-patch-5addeaf22fbabe5f7d685dea" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "17.3.0" } }
2018-04-25T12:33:41.397-0400 I STORAGE [conn1] createCollection: local.oplog.rs with generated UUID: 79654efd-47c9-4932-bdd7-c3f6614577ea
2018-04-25T12:33:41.398-0400 I NETWORK [conn9] received client metadata from 127.0.0.1:59115 conn9: { driver: { name: "NetworkInterfaceTL", version: "3.7.5-90-g5122108-patch-5addeaf22fbabe5f7d685dea" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "17.3.0" } }
2018-04-25T12:33:41.442-0400 I STORAGE [conn1] Starting WiredTigerRecordStoreThread local.oplog.rs
2018-04-25T12:33:41.443-0400 I STORAGE [conn1] The size storer reports that the oplog contains 0 records totaling to 0 bytes
2018-04-25T12:33:41.443-0400 I STORAGE [conn1] Scanning the oplog to determine where to place markers for truncation
2018-04-25T12:33:41.540-0400 I REPL [conn1] ******
2018-04-25T12:33:41.545-0400 I STORAGE [conn1] createCollection: local.system.replset with generated UUID: 91066e1e-b6bd-4bd6-a67a-717b28ee65e7
2018-04-25T12:33:41.720-0400 I STORAGE [conn1] createCollection: admin.system.version with provided UUID: 6f9e57a2-0b0a-440a-a916-ec75f8664cfe
2018-04-25T12:33:41.795-0400 I COMMAND [conn1] setting featureCompatibilityVersion to 4.0
2018-04-25T12:33:41.796-0400 I NETWORK [conn1] Skip closing connection for connection # 9
2018-04-25T12:33:41.796-0400 I NETWORK [conn1] Skip closing connection for connection # 8
2018-04-25T12:33:41.797-0400 I NETWORK [conn1] Skip closing connection for connection # 1
2018-04-25T12:33:41.798-0400 I REPL [conn1] New replica set config in use: { _id: "csrs", version: 1, configsvr: true, protocolVersion: 1, writeConcernMajorityJournalDefault: true, members: [ { _id: 0, host: "louisamac:9007", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "louisamac:9008", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "louisamac:9009", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, catchUpTimeoutMillis: -1, catchUpTakeoverDelayMillis: 30000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('5ae0ade5b7d3c00d69bcf532') } }
2018-04-25T12:33:41.799-0400 I REPL [conn1] This node is louisamac:9007 in the config
2018-04-25T12:33:41.800-0400 I REPL [conn1] transition to STARTUP2 from STARTUP
2018-04-25T12:33:41.801-0400 I REPL [conn1] Starting replication storage threads
2018-04-25T12:33:41.801-0400 I REPL [replexec-0] Member louisamac:9008 is now in state STARTUP
2018-04-25T12:33:41.802-0400 I REPL [replexec-1] Member louisamac:9009 is now in state STARTUP
2018-04-25T12:33:41.803-0400 I REPL [conn1] transition to RECOVERING from STARTUP2
2018-04-25T12:33:41.804-0400 I REPL [conn1] Starting replication fetcher thread
2018-04-25T12:33:41.805-0400 I REPL [conn1] Starting replication applier thread
2018-04-25T12:33:41.805-0400 I REPL [conn1] Starting replication reporter thread
2018-04-25T12:33:41.805-0400 I REPL [rsSync-0] Starting oplog application
2018-04-25T12:33:41.806-0400 I COMMAND [conn1] command local.system.replset appName: "MongoDB Shell" command: replSetInitiate { replSetInitiate: { configsvr: true, protocolVersion: 1.0, _id: "csrs", members: [ { _id: 0.0, host: "louisamac:9007" }, { _id: 1.0, host: "louisamac:9008" }, { _id: 2.0, host: "louisamac:9009" } ] }, $clusterTime: { clusterTime: Timestamp(0, 0), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" } numYields:0 reslen:252 locks:{ Global: { acquireCount: { r: 15, w: 6, W: 2 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 107 } }, Database: { acquireCount: { r: 3, w: 3, W: 3 } }, Collection: { acquireCount: { r: 1, w: 2 } }, oplog: { acquireCount: { r: 2, w: 3 } } } protocol:op_msg 420ms
2018-04-25T12:33:41.807-0400 I REPL [rsSync-0] transition to SECONDARY from RECOVERING
2018-04-25T12:33:43.404-0400 I NETWORK [listener] connection accepted from 127.0.0.1:59117 #10 (4 connections now open)
2018-04-25T12:33:43.405-0400 I NETWORK [listener] connection accepted from 127.0.0.1:59116 #11 (5 connections now open)
2018-04-25T12:33:43.405-0400 I NETWORK [conn10] end connection 127.0.0.1:59117 (4 connections now open)
2018-04-25T12:33:43.406-0400 I NETWORK [conn11] end connection 127.0.0.1:59116 (3 connections now open)
2018-04-25T12:33:43.817-0400 I REPL [replexec-0] Member louisamac:9008 is now in state STARTUP2
2018-04-25T12:33:43.826-0400 I REPL [replexec-1] Member louisamac:9009 is now in state STARTUP2
2018-04-25T12:33:43.996-0400 I NETWORK [listener] connection accepted from 127.0.0.1:59122 #12 (4 connections now open)
2018-04-25T12:33:44.009-0400 I NETWORK [conn12] received client metadata from 127.0.0.1:59122 conn12: { driver: { name: "NetworkInterfaceTL", version: "3.7.5-90-g5122108-patch-5addeaf22fbabe5f7d685dea" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "17.3.0" } }
2018-04-25T12:33:44.032-0400 I NETWORK [listener] connection accepted from 127.0.0.1:59123 #13 (5 connections now open)
2018-04-25T12:33:44.043-0400 I NETWORK [conn13] received client metadata from 127.0.0.1:59123 conn13: { driver: { name: "NetworkInterfaceTL", version: "3.7.5-90-g5122108-patch-5addeaf22fbabe5f7d685dea" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "17.3.0" } }
2018-04-25T12:33:44.158-0400 I NETWORK [listener] connection accepted from 127.0.0.1:59124 #14 (6 connections now open)
2018-04-25T12:33:44.168-0400 I NETWORK [conn14] received client metadata from 127.0.0.1:59124 conn14: { driver: { name: "NetworkInterfaceTL", version: "3.7.5-90-g5122108-patch-5addeaf22fbabe5f7d685dea" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "17.3.0" } }
2018-04-25T12:33:44.183-0400 I NETWORK [listener] connection accepted from 127.0.0.1:59125 #15 (7 connections now open)
2018-04-25T12:33:44.184-0400 I NETWORK [conn15] received client metadata from 127.0.0.1:59125 conn15: { driver: { name: "NetworkInterfaceTL", version: "3.7.5-90-g5122108-patch-5addeaf22fbabe5f7d685dea" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "17.3.0" } }
2018-04-25T12:33:44.328-0400 I REPL [replexec-1] Member louisamac:9009 is now in state SECONDARY
2018-04-25T12:33:44.821-0400 I REPL [replexec-1] Member louisamac:9008 is now in state SECONDARY
2018-04-25T12:33:52.365-0400 I REPL [replexec-1] Starting an election, since we've seen no PRIMARY in the past 10000ms
2018-04-25T12:33:52.366-0400 I REPL [replexec-1] conducting a dry run election to see if we could be elected. current term: 0
2018-04-25T12:33:52.367-0400 I REPL [replexec-0] VoteRequester(term 0 dry run) received a yes vote from louisamac:9008; response message: { term: 0, voteGranted: true, reason: "", ok: 1.0, operationTime: Timestamp(1524674021, 1), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, $clusterTime: { clusterTime: Timestamp(1524674021, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, lastCommittedOpTime: Timestamp(0, 0) }
2018-04-25T12:33:52.368-0400 I REPL [replexec-0] dry election run succeeded, running for election in term 1
2018-04-25T12:33:52.369-0400 I STORAGE [replexec-0] createCollection: local.replset.election with generated UUID: 3b8ba32f-fc2a-42c6-b48d-31e45a34e945
2018-04-25T12:33:52.565-0400 I REPL [replexec-1] VoteRequester(term 1) received a yes vote from louisamac:9009; response message: { term: 1, voteGranted: true, reason: "", ok: 1.0, operationTime: Timestamp(1524674021, 1), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, $clusterTime: { clusterTime: Timestamp(1524674021, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, lastCommittedOpTime: Timestamp(0, 0) }
2018-04-25T12:33:52.568-0400 I REPL [replexec-1] election succeeded, assuming primary role in term 1
2018-04-25T12:33:52.579-0400 I REPL [replexec-1] transition to PRIMARY from SECONDARY
2018-04-25T12:33:52.590-0400 I REPL [replexec-1] Entering primary catch-up mode.
2018-04-25T12:33:52.600-0400 I ASIO [replexec-4] Connecting to louisamac:9008
2018-04-25T12:33:52.614-0400 I REPL [replexec-0] Caught up to the latest optime known via heartbeats after becoming primary.
2018-04-25T12:33:52.615-0400 I REPL [replexec-0] Exited primary catch-up mode.
2018-04-25T12:33:52.615-0400 I REPL [replexec-0] Stopping replication producer
2018-04-25T12:33:53.844-0400 I STORAGE [rsSync-0] createCollection: config.chunks with generated UUID: 152e721e-fb06-4d66-b5db-c91bfe40fccc
2018-04-25T12:33:53.963-0400 I INDEX [rsSync-0] build index on: config.chunks properties: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
2018-04-25T12:33:53.967-0400 I INDEX [rsSync-0] building index using bulk method; build may temporarily use up to 500 megabytes of RAM
2018-04-25T12:33:53.979-0400 I INDEX [rsSync-0] build index done. scanned 0 total records. 0 secs
2018-04-25T12:33:53.980-0400 I COMMAND [rsSync-0] command config.$cmd command: createIndexes { createIndexes: "chunks", indexes: [ { name: "ns_1_min_1", key: { ns: 1, min: 1 }, unique: true } ], $db: "config" } numYields:0 reslen:348 locks:{ Global: { acquireCount: { r: 14, w: 6, W: 1 } }, Database: { acquireCount: { r: 4, w: 5, W: 1 } }, Collection: { acquireCount: { r: 3, w: 2 } }, oplog: { acquireCount: { r: 1, w: 4 } } } protocol:op_msg 136ms
2018-04-25T12:33:54.015-0400 I INDEX [rsSync-0] build index on: config.chunks properties: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
2018-04-25T12:33:54.020-0400 I INDEX [rsSync-0] building index using bulk method; build may temporarily use up to 500 megabytes of RAM
2018-04-25T12:33:54.032-0400 I INDEX [rsSync-0] build index done. scanned 0 total records. 0 secs
2018-04-25T12:33:54.033-0400 I NETWORK [conn12] end connection 127.0.0.1:59122 (6 connections now open)
2018-04-25T12:33:54.066-0400 I INDEX [rsSync-0] build index on: config.chunks properties: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
2018-04-25T12:33:54.070-0400 I INDEX [rsSync-0] building index using bulk method; build may temporarily use up to 500 megabytes of RAM
2018-04-25T12:33:54.084-0400 I INDEX [rsSync-0] build index done. scanned 0 total records. 0 secs
2018-04-25T12:33:54.085-0400 I STORAGE [rsSync-0] createCollection: config.migrations with generated UUID: 751f25d5-b2df-4623-bf8f-898056d44a54
2018-04-25T12:33:54.186-0400 I NETWORK [conn14] end connection 127.0.0.1:59124 (5 connections now open)
2018-04-25T12:33:54.206-0400 I INDEX [rsSync-0] build index on: config.migrations properties: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }
2018-04-25T12:33:54.207-0400 I INDEX [rsSync-0] building index using bulk method; build may temporarily use up to 500 megabytes of RAM
2018-04-25T12:33:54.221-0400 I INDEX [rsSync-0] build index done. scanned 0 total records. 0 secs
2018-04-25T12:33:54.222-0400 I COMMAND [rsSync-0] command config.$cmd command: createIndexes { createIndexes: "migrations", indexes: [ { name: "ns_1_min_1", key: { ns: 1, min: 1 }, unique: true } ], $db: "config" } numYields:0 reslen:348 locks:{ Global: { acquireCount: { r: 24, w: 16, W: 1 } }, Database: { acquireCount: { r: 4, w: 12, W: 4 } }, Collection: { acquireCount: { r: 3, w: 5 } }, oplog: { acquireCount: { r: 1, w: 11 } } } protocol:op_msg 137ms
2018-04-25T12:33:54.223-0400 I STORAGE [rsSync-0] createCollection: config.shards with generated UUID: 22c4d8f5-f90a-4afe-8439-3ec7535307c3
2018-04-25T12:33:54.323-0400 I INDEX [rsSync-0] build index on: config.shards properties: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }
2018-04-25T12:33:54.328-0400 I INDEX [rsSync-0] building index using bulk method; build may temporarily use up to 500 megabytes of RAM
2018-04-25T12:33:54.340-0400 I INDEX [rsSync-0] build index done. scanned 0 total records. 0 secs
2018-04-25T12:33:54.341-0400 I COMMAND [rsSync-0] command config.$cmd command: createIndexes { createIndexes: "shards", indexes: [ { name: "host_1", key: { host: 1 }, unique: true } ], $db: "config" } numYields:0 reslen:348 locks:{ Global: { acquireCount: { r: 28, w: 20, W: 1 } }, Database: { acquireCount: { r: 4, w: 15, W: 5 } }, Collection: { acquireCount: { r: 3, w: 6 } }, oplog: { acquireCount: { r: 1, w: 14 } } } protocol:op_msg 118ms
2018-04-25T12:33:54.342-0400 I STORAGE [rsSync-0] createCollection: config.locks with generated UUID: 2f80f67d-fa35-49bd-87f4-71e474c67def
2018-04-25T12:33:54.459-0400 I INDEX [rsSync-0] build index on: config.locks properties: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }
2018-04-25T12:33:54.464-0400 I INDEX [rsSync-0] building index using bulk method; build may temporarily use up to 500 megabytes of RAM
2018-04-25T12:33:54.479-0400 I INDEX [rsSync-0] build index done. scanned 0 total records. 0 secs
2018-04-25T12:33:54.480-0400 I COMMAND [rsSync-0] command config.$cmd command: createIndexes { createIndexes: "locks", indexes: [ { name: "ts_1", key: { ts: 1 }, unique: false } ], $db: "config" } numYields:0 reslen:348 locks:{ Global: { acquireCount: { r: 32, w: 24, W: 1 } }, Database: { acquireCount: { r: 4, w: 18, W: 6 } }, Collection: { acquireCount: { r: 3, w: 7 } }, oplog: { acquireCount: { r: 1, w: 17 } } } protocol:op_msg 137ms
2018-04-25T12:33:54.513-0400 I INDEX [rsSync-0] build index on: config.locks properties: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }
2018-04-25T12:33:54.517-0400 I INDEX [rsSync-0] building index using bulk method; build may temporarily use up to 500 megabytes of RAM
2018-04-25T12:33:54.529-0400 I INDEX [rsSync-0] build index done. scanned 0 total records. 0 secs
2018-04-25T12:33:54.530-0400 I STORAGE [rsSync-0] createCollection: config.lockpings with generated UUID: 5f9a872f-6afc-42a3-963c-563e9ba5b190
2018-04-25T12:33:54.651-0400 I INDEX [rsSync-0] build index on: config.lockpings properties: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }
2018-04-25T12:33:54.655-0400 I INDEX [rsSync-0] building index using bulk method; build may temporarily use up to 500 megabytes of RAM
2018-04-25T12:33:54.668-0400 I INDEX [rsSync-0] build index done. scanned 0 total records. 0 secs
2018-04-25T12:33:54.670-0400 I COMMAND [rsSync-0] command config.$cmd command: createIndexes { createIndexes: "lockpings", indexes: [ { name: "ping_1", key: { ping: 1 }, unique: false } ], $db: "config" } numYields:0 reslen:348 locks:{ Global: { acquireCount: { r: 39, w: 31, W: 1 } }, Database: { acquireCount: { r: 4, w: 23, W: 8 } }, Collection: { acquireCount: { r: 3, w: 9 } }, oplog: { acquireCount: { r: 1, w: 22 } } } protocol:op_msg 139ms
2018-04-25T12:33:54.671-0400 I STORAGE [rsSync-0] createCollection: config.tags with generated UUID: d67d69d8-3b03-4acc-806f-3a72e8fd3ce2
2018-04-25T12:33:54.782-0400 I INDEX [rsSync-0] build index on: config.tags properties: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }
2018-04-25T12:33:54.786-0400 I INDEX [rsSync-0] building index using bulk method; build may temporarily use up to 500 megabytes of RAM
2018-04-25T12:33:54.815-0400 I INDEX [rsSync-0] build index done. scanned 0 total records. 0 secs
2018-04-25T12:33:54.816-0400 I COMMAND [rsSync-0] command config.$cmd command: createIndexes { createIndexes: "tags", indexes: [ { name: "ns_1_min_1", key: { ns: 1, min: 1 }, unique: true } ], $db: "config" } numYields:0 reslen:348 locks:{ Global: { acquireCount: { r: 43, w: 35, W: 1 } }, Database: { acquireCount: { r: 4, w: 26, W: 9 } }, Collection: { acquireCount: { r: 3, w: 10 } }, oplog: { acquireCount: { r: 1, w: 25 } } } protocol:op_msg 145ms
2018-04-25T12:33:54.871-0400 I INDEX [rsSync-0] build index on: config.tags properties: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }
2018-04-25T12:33:54.875-0400 I INDEX [rsSync-0] building index using bulk method; build may temporarily use up to 500 megabytes of RAM
2018-04-25T12:33:54.887-0400 I INDEX [rsSync-0] build index done. scanned 0 total records. 0 secs
2018-04-25T12:33:54.889-0400 I STORAGE [rsSync-0] createCollection: config.version with generated UUID: aecaf7fb-89ae-4102-9204-b8f975441740
2018-04-25T12:33:54.968-0400 I STORAGE [rsSync-0] createCollection: config.transactions with generated UUID: 5b16f596-7044-41b4-9910-a4bca5b278bc
2018-04-25T12:33:54.968-0400 I SHARDING [Balancer] CSRS balancer is starting
2018-04-25T12:33:55.053-0400 I REPL [rsSync-0] transition to primary complete; database writes are now permitted
2018-04-25T12:33:55.058-0400 I COMMAND [ftdc] serverStatus was very slow: { after basic: 0, after asserts: 0, after backgroundFlushing: 0, after connections: 0, after dur: 0, after extra_info: 0, after globalLock: 0, after locks: 0, after logicalSessionRecordCache: 0, after network: 0, after opLatencies: 0, after opcounters: 0, after opcountersRepl: 0, after repl: 0, after shardingStatistics: 0, after storageEngine: 0, after transactions: 0, after wiredTiger: 1049, at end: 1049 }
2018-04-25T12:33:55.152-0400 I COMMAND [conn15] command local.oplog.rs command: find { find: "oplog.rs", limit: 1, sort: { $natural: 1 }, projection: { ts: 1, t: 1 }, $readPreference: { mode: "secondaryPreferred" }, $clusterTime: { clusterTime: Timestamp(1524674033, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:1 nreturned:1 reslen:337 locks:{ Global: { acquireCount: { r: 2 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 702633 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_msg 797ms
2018-04-25T12:33:55.155-0400 I NETWORK [listener] connection accepted from 127.0.0.1:59127 #17 (6 connections now open)
2018-04-25T12:33:55.156-0400 I NETWORK [conn17] received client metadata from 127.0.0.1:59127 conn17: { driver: { name: "NetworkInterfaceTL", version: "3.7.5-90-g5122108-patch-5addeaf22fbabe5f7d685dea" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "17.3.0" } }
2018-04-25T12:33:55.169-0400 I STORAGE [conn15] Triggering the first stable checkpoint. Initial Data: Timestamp(1524674021, 1) PrevStable: Timestamp(0, 0) CurrStable: Timestamp(1524674033, 1)
2018-04-25T12:33:55.359-0400 I NETWORK [listener] connection accepted from 127.0.0.1:59128 #18 (7 connections now open)
2018-04-25T12:33:55.371-0400 I NETWORK [conn18] received client metadata from 127.0.0.1:59128 conn18: { driver: { name: "NetworkInterfaceTL", version: "3.7.5-90-g5122108-patch-5addeaf22fbabe5f7d685dea" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "17.3.0" } }
2018-04-25T12:33:57.168-0400 I SHARDING [Balancer] CSRS balancer thread is recovering
2018-04-25T12:33:57.170-0400 I SHARDING [Balancer] CSRS balancer thread is recovered
2018-04-25T12:33:57.174-0400 I STORAGE [monitoring keys for HMAC] createCollection: admin.system.keys with generated UUID: d2c7a208-1b90-4f18-a1b0-6331a4e7f2b8
2018-04-25T12:33:57.699-0400 I COMMAND [monitoring keys for HMAC] command admin.system.keys command: insert { insert: "system.keys", bypassDocumentValidation: false, ordered: true, documents: [ { _id: 6548425113090392089, purpose: "HMAC", key: BinData(0, E47DC74867AB55C48998252501963D160C33EDA5), expiresAt: Timestamp(1532450034, 0) } ], writeConcern: { w: "majority", wtimeout: 15000 }, allowImplicitCollectionCreation: true, $db: "admin" } ninserted:1 keysInserted:1 numYields:0 reslen:339 locks:{ Global: { acquireCount: { r: 6, w: 5 } }, Database: { acquireCount: { r: 1, w: 2, W: 3 } }, Collection: { acquireCount: { r: 1, w: 2 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_msg 524ms
2018-04-25T12:34:01.883-0400 W SHARDING [replSetDistLockPinger] pinging failed for distributed lock pinger :: caused by :: LockStateChangeFailed: findAndModify query predicate didn't match any lock document
2018-04-25T12:34:02.463-0400 I ASIO [NetworkInterfaceASIO-Replication] Ending connection to host louisamac:9008 due to bad connection status; 1 connections to that host remain open
2018-04-25T12:34:40.301-0400 I NETWORK [conn1] end connection 127.0.0.1:59094 (6 connections now open)
2018-04-25T12:34:52.032-0400 I NETWORK [listener] connection accepted from 127.0.0.1:59137 #19 (7 connections now open)
2018-04-25T12:34:52.033-0400 I NETWORK [conn19] received client metadata from 127.0.0.1:59137 conn19: { driver: { name: "MongoDB Internal Client", version: "3.7.5-90-g5122108-patch-5addeaf22fbabe5f7d685dea" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "17.3.0" } }
2018-04-25T12:34:52.036-0400 I NETWORK [listener] connection accepted from 127.0.0.1:59139 #20 (8 connections now open)
2018-04-25T12:34:52.037-0400 I NETWORK [conn20] received client metadata from 127.0.0.1:59139 conn20: { driver: { name: "MongoDB Internal Client", version: "3.7.5-90-g5122108-patch-5addeaf22fbabe5f7d685dea" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "17.3.0" } }
2018-04-25T12:34:52.040-0400 I NETWORK [listener] connection accepted from 127.0.0.1:59142 #21 (9 connections now open)
2018-04-25T12:34:52.041-0400 I NETWORK [listener] connection accepted from 127.0.0.1:59143 #22 (10 connections now open)
2018-04-25T12:34:52.041-0400 I NETWORK [conn21] received client metadata from 127.0.0.1:59142 conn21: { driver: { name: "NetworkInterfaceTL", version: "3.7.5-90-g5122108-patch-5addeaf22fbabe5f7d685dea" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "17.3.0" } }
2018-04-25T12:34:52.042-0400 I NETWORK [conn22] received client metadata from 127.0.0.1:59143 conn22: { driver: { name: "NetworkInterfaceTL", version: "3.7.5-90-g5122108-patch-5addeaf22fbabe5f7d685dea" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "17.3.0" } }
2018-04-25T12:34:52.045-0400 I NETWORK [listener] connection accepted from 127.0.0.1:59145 #23 (11 connections now open)
2018-04-25T12:34:52.055-0400 I NETWORK [conn23] received client metadata from 127.0.0.1:59145 conn23: { driver: { name: "NetworkInterfaceTL", version: "3.7.5-90-g5122108-patch-5addeaf22fbabe5f7d685dea" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "17.3.0" } }
2018-04-25T12:34:52.087-0400 I STORAGE [conn21] createCollection: config.mongos with generated UUID: 4f4a41b9-41eb-433c-9a54-df69e9a049a0
2018-04-25T12:34:52.307-0400 I COMMAND [conn21] command config.$cmd command: update { update: "mongos", bypassDocumentValidation: false, ordered: true, updates: [ { q: { _id: "louisamac:9010" }, u: { $set: { _id: "louisamac:9010", ping: new Date(1524674092081), up: 0, waiting: true, mongoVersion: "3.7.5-90-g5122108-patch-5addeaf22fbabe5f7d685dea", advisoryHostFQDNs: [] } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, allowImplicitCollectionCreation: true, maxTimeMS: 30000, $replData: 1, $clusterTime: { clusterTime: Timestamp(1524674092, 1), signature: { hash: BinData(0, E043C409D6C6A32FF7352110E81C1A2C71AC1880), keyId: 6548425113090392089 } }, $configServerState: { opTime: { ts: Timestamp(1524674092, 1), t: 1 } }, $db: "config" } numYields:0 reslen:618 locks:{ Global: { acquireCount: { r: 5, w: 5 } }, Database: { acquireCount: { w: 4, W: 1 } }, Collection: { acquireCount: { w: 2 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_msg 219ms
2018-04-25T12:36:44.152-0400 I NETWORK [conn21] Starting new replica set monitor for a/louisamac:9001,louisamac:9002,louisamac:9003
2018-04-25T12:36:44.159-0400 I NETWORK [ReplicaSetMonitor-TaskExecutor] Successfully connected to louisamac:9002 (1 connections now open to louisamac:9002 with a 5 second timeout)
2018-04-25T12:36:44.159-0400 I NETWORK [conn21] Successfully connected to louisamac:9003 (1 connections now open to louisamac:9003 with a 5 second timeout)
2018-04-25T12:36:44.162-0400 I NETWORK [ReplicaSetMonitor-TaskExecutor] Successfully connected to louisamac:9001 (1 connections now open to 
louisamac:9001 with a 5 second timeout) 2018-04-25T12:36:44.163-0400 I ASIO [conn21] Connecting to louisamac:9001 2018-04-25T12:36:44.197-0400 I NETWORK [listener] connection accepted from 127.0.0.1:59177 #28 (12 connections now open) 2018-04-25T12:36:44.197-0400 I NETWORK [conn28] received client metadata from 127.0.0.1:59177 conn28: { driver: { name: "MongoDB Internal Client", version: "3.7.5-90-g5122108-patch-5addeaf22fbabe5f7d685dea" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "17.3.0" } } 2018-04-25T12:36:44.206-0400 I NETWORK [listener] connection accepted from 127.0.0.1:59178 #29 (13 connections now open) 2018-04-25T12:36:44.207-0400 I NETWORK [listener] connection accepted from 127.0.0.1:59179 #30 (14 connections now open) 2018-04-25T12:36:44.207-0400 I NETWORK [conn29] received client metadata from 127.0.0.1:59178 conn29: { driver: { name: "NetworkInterfaceTL", version: "3.7.5-90-g5122108-patch-5addeaf22fbabe5f7d685dea" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "17.3.0" } } 2018-04-25T12:36:44.208-0400 I NETWORK [conn30] received client metadata from 127.0.0.1:59179 conn30: { driver: { name: "NetworkInterfaceTL", version: "3.7.5-90-g5122108-patch-5addeaf22fbabe5f7d685dea" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "17.3.0" } } 2018-04-25T12:36:44.214-0400 I NETWORK [listener] connection accepted from 127.0.0.1:59182 #31 (15 connections now open) 2018-04-25T12:36:44.222-0400 I NETWORK [conn31] received client metadata from 127.0.0.1:59182 conn31: { driver: { name: "MongoDB Internal Client", version: "3.7.5-90-g5122108-patch-5addeaf22fbabe5f7d685dea" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "17.3.0" } } 2018-04-25T12:36:44.223-0400 I NETWORK [listener] connection accepted from 127.0.0.1:59185 #32 (16 connections now open) 2018-04-25T12:36:44.235-0400 I NETWORK [conn32] received client metadata from 127.0.0.1:59185 conn32: { 
driver: { name: "MongoDB Internal Client", version: "3.7.5-90-g5122108-patch-5addeaf22fbabe5f7d685dea" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "17.3.0" } } 2018-04-25T12:36:44.247-0400 I NETWORK [listener] connection accepted from 127.0.0.1:59187 #33 (17 connections now open) 2018-04-25T12:36:44.248-0400 I NETWORK [conn33] received client metadata from 127.0.0.1:59187 conn33: { driver: { name: "NetworkInterfaceTL", version: "3.7.5-90-g5122108-patch-5addeaf22fbabe5f7d685dea" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "17.3.0" } } 2018-04-25T12:36:44.249-0400 I NETWORK [listener] connection accepted from 127.0.0.1:59189 #34 (18 connections now open) 2018-04-25T12:36:44.250-0400 I NETWORK [listener] connection accepted from 127.0.0.1:59191 #35 (19 connections now open) 2018-04-25T12:36:44.250-0400 I NETWORK [conn34] received client metadata from 127.0.0.1:59189 conn34: { driver: { name: "NetworkInterfaceTL", version: "3.7.5-90-g5122108-patch-5addeaf22fbabe5f7d685dea" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "17.3.0" } } 2018-04-25T12:36:44.251-0400 I NETWORK [listener] connection accepted from 127.0.0.1:59192 #36 (20 connections now open) 2018-04-25T12:36:44.251-0400 I NETWORK [conn35] received client metadata from 127.0.0.1:59191 conn35: { driver: { name: "NetworkInterfaceTL", version: "3.7.5-90-g5122108-patch-5addeaf22fbabe5f7d685dea" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "17.3.0" } } 2018-04-25T12:36:44.261-0400 I NETWORK [conn36] received client metadata from 127.0.0.1:59192 conn36: { driver: { name: "NetworkInterfaceTL", version: "3.7.5-90-g5122108-patch-5addeaf22fbabe5f7d685dea" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "17.3.0" } } 2018-04-25T12:37:14.246-0400 I NETWORK [conn21] Removed ReplicaSetMonitor for replica set a 2018-04-25T12:37:14.246-0400 I ASIO [AddShard-TaskExecutor] Ending 
connection to host louisamac:9001 due to bad connection status; 0 connections to that host remain open 2018-04-25T12:37:14.255-0400 I SHARDING [conn21] addShard request 'AddShardRequest shard: a/louisamac:9001,louisamac:9002,louisamac:9003, name: shard0'failed :: caused by :: OperationFailed: failed to run command { setFeatureCompatibilityVersion: "4.0" } when attempting to add shard a/louisamac:9001,louisamac:9002,louisamac:9003 :: caused by :: NetworkInterfaceExceededTimeLimit: timed out 2018-04-25T12:37:14.266-0400 I COMMAND [conn21] command admin.$cmd appName: "MongoDB Shell" command: _configsvrAddShard { _configsvrAddShard: "a/louisamac:9001,louisamac:9002,louisamac:9003", name: "shard0", writeConcern: { w: "majority", wtimeout: 60000 }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1524674202, 1), signature: { hash: BinData(0, 6A5920CC34EBEB125CD009AED8BD5F519207FFA7), keyId: 6548425113090392089 } }, $client: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.6.2" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "17.3.0" }, mongos: { host: "louisamac:9010", client: "127.0.0.1:59156", version: "3.7.5-90-g5122108-patch-5addeaf22fbabe5f7d685dea" } }, $configServerState: { opTime: { ts: Timestamp(1524674202, 1), t: 1 } }, $db: "admin" } numYields:0 ok:0 errMsg:"failed to run command { setFeatureCompatibilityVersion: \"4.0\" } when attempting to add shard a/louisamac:9001,louisamac:9002,louisamac:9003 :: caused by :: NetworkInterfaceExceededTimeLimit: timed out" errName:OperationFailed errCode:96 reslen:731 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1, W: 1 } } } protocol:op_msg 30114ms 2018-04-25T12:38:32.089-0400 I CONTROL [thread28] Failed to create config.system.sessions: Cannot create config.system.sessions until there are shards, will try again at the next refresh interval 
2018-04-25T12:38:32.091-0400 I CONTROL [thread28] Sessions collection is not set up; waiting until next sessions refresh interval: Cannot create config.system.sessions until there are shards 2018-04-25T12:43:32.093-0400 I SHARDING [thread3] Refreshing chunks for collection config.system.sessions based on version 0|0||000000000000000000000000 2018-04-25T12:43:32.095-0400 I SHARDING [ConfigServerCatalogCacheLoader-1] Refresh for collection config.system.sessions took 2 ms and found the collection is not sharded 2018-04-25T12:43:32.096-0400 I CONTROL [thread3] Failed to create config.system.sessions: Cannot create config.system.sessions until there are shards, will try again at the next refresh interval 2018-04-25T12:43:32.097-0400 I CONTROL [thread3] Sessions collection is not set up; waiting until next sessions refresh interval: Cannot create config.system.sessions until there are shards