Type: Bug
Resolution: Cannot Reproduce
Priority: Major - P3
Fix Version/s: None
Affects Version/s: 3.4.2
Component/s: Replication
Operating System: ALL
Environment: Replica set with 2 data-bearing nodes and an arbiter, with one data-bearing node down
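For reference, a sketch of how an equivalent topology could be set up (the host names and ports come from the rs.status() output below; everything else, such as the assumption that three mongod processes are already running, is illustrative and not from the report):

// Sketch: initiate the set with two data-bearing members and one arbiter,
// using the member list reported by rs.status() below.
rs.initiate({
    _id: "replset",
    members: [
        { _id: 0, host: "AD-MAC10G.local:27017" },
        { _id: 1, host: "AD-MAC10G.local:27018" },
        { _id: 2, host: "AD-MAC10G.local:27019", arbiterOnly: true }
    ]
})
// In the reported state, the member on 27017 is shut down, leaving one
// data-bearing node (the primary) plus the arbiter.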
When performing a w:"majority" write that would otherwise fail with a duplicate key exception, I expect it to fail with a write concern timeout error after the timeout has expired. However, it still produces the duplicate key exception after the wait:
replset:PRIMARY> db.test.insertOne({_id:1})
{ "acknowledged" : true, "insertedId" : 1 }
replset:PRIMARY> db.test.insertOne({_id:1},{writeConcern:{w:"majority",wtimeout:5000}})
... WAIT ...
2017-04-26T18:09:57.685-0400 E QUERY [thread1] WriteError: E11000 duplicate key error collection: test.test index: _id_ dup key: { : 1.0 } :
WriteError({
	"index" : 0,
	"code" : 11000,
	"errmsg" : "E11000 duplicate key error collection: test.test index: _id_ dup key: { : 1.0 }",
	"op" : { "_id" : 1 }
})
WriteError@src/mongo/shell/bulk_api.js:469:48
Bulk/mergeBatchResults@src/mongo/shell/bulk_api.js:836:49
Bulk/executeBatch@src/mongo/shell/bulk_api.js:906:13
Bulk/this.execute@src/mongo/shell/bulk_api.js:1150:21
DBCollection.prototype.insertOne@src/mongo/shell/crud_api.js:242:9
@(shell):1:1
replset:PRIMARY> rs.status()
{
	"set" : "replset",
	"date" : ISODate("2017-04-26T22:10:05.449Z"),
	"myState" : 1,
	"term" : NumberLong(3),
	"heartbeatIntervalMillis" : NumberLong(2000),
	"optimes" : {
		"lastCommittedOpTime" : {
			"ts" : Timestamp(1493242117, 1),
			"t" : NumberLong(1)
		},
		"appliedOpTime" : {
			"ts" : Timestamp(1493244598, 1),
			"t" : NumberLong(3)
		},
		"durableOpTime" : {
			"ts" : Timestamp(1493244598, 1),
			"t" : NumberLong(3)
		}
	},
	"members" : [
		{
			"_id" : 0,
			"name" : "AD-MAC10G.local:27017",
			"health" : 0,
			"state" : 8,
			"stateStr" : "(not reachable/healthy)",
			"uptime" : 0,
			"optime" : {
				"ts" : Timestamp(0, 0),
				"t" : NumberLong(-1)
			},
			"optimeDurable" : {
				"ts" : Timestamp(0, 0),
				"t" : NumberLong(-1)
			},
			"optimeDate" : ISODate("1970-01-01T00:00:00Z"),
			"optimeDurableDate" : ISODate("1970-01-01T00:00:00Z"),
			"lastHeartbeat" : ISODate("2017-04-26T22:10:04.105Z"),
			"lastHeartbeatRecv" : ISODate("2017-04-26T21:28:42.092Z"),
			"pingMs" : NumberLong(0),
			"lastHeartbeatMessage" : "Connection refused",
			"configVersion" : -1
		},
		{
			"_id" : 1,
			"name" : "AD-MAC10G.local:27018",
			"health" : 1,
			"state" : 1,
			"stateStr" : "PRIMARY",
			"uptime" : 2740,
			"optime" : {
				"ts" : Timestamp(1493244598, 1),
				"t" : NumberLong(3)
			},
			"optimeDate" : ISODate("2017-04-26T22:09:58Z"),
			"infoMessage" : "could not find member to sync from",
			"electionTime" : Timestamp(1493244557, 1),
			"electionDate" : ISODate("2017-04-26T22:09:17Z"),
			"configVersion" : 1,
			"self" : true
		},
		{
			"_id" : 2,
			"name" : "AD-MAC10G.local:27019",
			"health" : 1,
			"state" : 7,
			"stateStr" : "ARBITER",
			"uptime" : 57,
			"lastHeartbeat" : ISODate("2017-04-26T22:10:03.982Z"),
			"lastHeartbeatRecv" : ISODate("2017-04-26T22:10:02.801Z"),
			"pingMs" : NumberLong(0),
			"configVersion" : 1
		}
	],
	"ok" : 1
}
replset:PRIMARY>
This makes it impossible to distinguish the write concern failure (the write never replicated to a majority) from an ordinary duplicate key error.
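For illustration, one way to see both error channels is to bypass the shell helpers and run the insert command directly; this is only a sketch (the fields shown in the comments are illustrative, not taken from the report), but the raw server response can carry a writeErrors array and a writeConcernError side by side, which the insertOne() helper above collapses into the duplicate key error alone:

// Sketch: issue the insert as a raw command so the full server response,
// including any writeConcernError alongside writeErrors, is visible.
var res = db.runCommand({
    insert: "test",
    documents: [ { _id: 1 } ],
    writeConcern: { w: "majority", wtimeout: 5000 }
});
printjson(res);
// A response may contain both of these fields (contents illustrative):
//   "writeErrors"       : [ { "index" : 0, "code" : 11000, "errmsg" : "E11000 ..." } ]
//   "writeConcernError" : { "code" : 64, "errmsg" : "waiting for replication timed out" }
if (res.writeConcernError) {
    print("write concern failed: " + res.writeConcernError.errmsg);
}
if (res.writeErrors) {
    print("write error: " + res.writeErrors[0].errmsg);
}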
Is related to: SERVER-29263 Shell bulk api hides write concern errors when there is a write error and the bulk write is 'ordered' (Closed)