Core Server / SERVER-27138

Cannot run replSetStepDown for a 3.4 replica set with journaling disabled

    • Type: Bug
    • Resolution: Done
    • Priority: Major - P3
    • Affects Version/s: 3.4.0-rc4
    • Component/s: Replication
    • Operating System: ALL

      1. Start a three-node replica set in which every node runs WiredTiger with journaling disabled
      2. Make a single write
      3. Attempt to step down the primary
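
      For concreteness, here is a minimal shell sketch of the repro, assuming the three --nojournal mongod processes shown at the bottom of this report are already running (hosts and set name are taken from the rs.status() output below; the database and collection names are placeholders):

      // connect to one node, initiate the set, make a single write, then step down
      rs.initiate({
      	_id: "backup_test",
      	members: [
      		{ _id: 0, host: "cailinmac:27000" },
      		{ _id: 1, host: "cailinmac:27010" },
      		{ _id: 2, host: "cailinmac:27020" }
      	]
      })
      // (wait for a primary to be elected before writing)
      db.getSiblingDB("test").docs.insert({ x: 1 })  // step 2: a single write
      rs.stepDown()                                  // step 3: fails as shown below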

      When you attempt the step down, the command fails with:

      backup_test:PRIMARY> rs.stepDown()
      {
      	"ok" : 0,
      	"errmsg" : "No electable secondaries caught up as of 2016-11-20T21:56:10.466+0000. Please use {force: true} to force node to step down.",
      	"code" : 50,
      	"codeName" : "ExceededTimeLimit"
      }
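
      As the error message suggests, the step down can still be forced. A minimal sketch of that workaround from the primary's shell (the 60 is a placeholder stepdown period in seconds):

      backup_test:PRIMARY> db.adminCommand({ replSetStepDown: 60, force: true })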
      

      The rs.status() output is shown below. Note that the durable optimes never advance: the primary's durableOpTime and member 2's optimeDurable are the null optime, and member 0's optimeDurable is stale:

      backup_test:PRIMARY> rs.status()
      {
      	"set" : "backup_test",
      	"date" : ISODate("2016-11-20T21:55:49.329Z"),
      	"myState" : 1,
      	"term" : NumberLong(2),
      	"heartbeatIntervalMillis" : NumberLong(2000),
      	"optimes" : {
      		"lastCommittedOpTime" : {
      			"ts" : Timestamp(0, 0),
      			"t" : NumberLong(-1)
      		},
      		"appliedOpTime" : {
      			"ts" : Timestamp(1479678946, 1),
      			"t" : NumberLong(2)
      		},
      		"durableOpTime" : {
      			"ts" : Timestamp(0, 0),
      			"t" : NumberLong(-1)
      		}
      	},
      	"members" : [
      		{
      			"_id" : 0,
      			"name" : "cailinmac:27000",
      			"health" : 1,
      			"state" : 2,
      			"stateStr" : "SECONDARY",
      			"uptime" : 7,
      			"optime" : {
      				"ts" : Timestamp(1479678946, 1),
      				"t" : NumberLong(2)
      			},
      			"optimeDurable" : {
      				"ts" : Timestamp(1479678888, 1),
      				"t" : NumberLong(1)
      			},
      			"optimeDate" : ISODate("2016-11-20T21:55:46Z"),
      			"optimeDurableDate" : ISODate("2016-11-20T21:54:48Z"),
      			"lastHeartbeat" : ISODate("2016-11-20T21:55:48.020Z"),
      			"lastHeartbeatRecv" : ISODate("2016-11-20T21:55:46.518Z"),
      			"pingMs" : NumberLong(0),
      			"syncingTo" : "cailinmac:27020",
      			"configVersion" : 1
      		},
      		{
      			"_id" : 1,
      			"name" : "cailinmac:27010",
      			"health" : 1,
      			"state" : 1,
      			"stateStr" : "PRIMARY",
      			"uptime" : 140,
      			"optime" : {
      				"ts" : Timestamp(1479678946, 1),
      				"t" : NumberLong(2)
      			},
      			"optimeDate" : ISODate("2016-11-20T21:55:46Z"),
      			"infoMessage" : "could not find member to sync from",
      			"electionTime" : Timestamp(1479678905, 1),
      			"electionDate" : ISODate("2016-11-20T21:55:05Z"),
      			"configVersion" : 1,
      			"self" : true
      		},
      		{
      			"_id" : 2,
      			"name" : "cailinmac:27020",
      			"health" : 1,
      			"state" : 2,
      			"stateStr" : "SECONDARY",
      			"uptime" : 120,
      			"optime" : {
      				"ts" : Timestamp(1479678946, 1),
      				"t" : NumberLong(2)
      			},
      			"optimeDurable" : {
      				"ts" : Timestamp(0, 0),
      				"t" : NumberLong(-1)
      			},
      			"optimeDate" : ISODate("2016-11-20T21:55:46Z"),
      			"optimeDurableDate" : ISODate("1970-01-01T00:00:00Z"),
      			"lastHeartbeat" : ISODate("2016-11-20T21:55:47.952Z"),
      			"lastHeartbeatRecv" : ISODate("2016-11-20T21:55:48.912Z"),
      			"pingMs" : NumberLong(0),
      			"syncingTo" : "cailinmac:27010",
      			"configVersion" : 1
      		}
      	],
      	"ok" : 1
      }
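
      Note also that lastCommittedOpTime in the output above is stuck at the null optime (Timestamp(0, 0)). If this behavior is by design for journal-less nodes, one possibly relevant knob is the 3.4 replica set setting writeConcernMajorityJournalDefault, which controls whether majority acknowledgement waits for journaled writes; a sketch, assuming that setting applies here:

      // hypothetical mitigation: tell the set not to wait on journaled optimes
      cfg = rs.conf()
      cfg.writeConcernMajorityJournalDefault = false
      rs.reconfig(cfg)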
      

      Here are the command-line options for one member. If I enable journaling, the problem disappears.

      backup_test:PRIMARY> db.runCommand({getCmdLineOpts:1})
      {
      	"argv" : [
      		"mongod",
      		"--port=27000",
      		"--replSet=backup_test",
      		"--dbpath=/Users/cailin/Documents/code/mms/data/db/replica/backup_test/backup_test_0",
      		"--logpath=/Users/cailin/Documents/code/mms/data/db/replica/backup_test/backup_test_0/mongodb.log",
      		"--logappend",
      		"--oplogSize=100",
      		"--storageEngine=wiredTiger",
      		"--nojournal",
      		"--wiredTigerEngineConfigString=cache_size=512MB"
      	],
      	"parsed" : {
      		"net" : {
      			"port" : 27000
      		},
      		"replication" : {
      			"oplogSizeMB" : 100,
      			"replSet" : "backup_test"
      		},
      		"storage" : {
      			"dbPath" : "/Users/cailin/Documents/code/mms/data/db/replica/backup_test/backup_test_0",
      			"engine" : "wiredTiger",
      			"journal" : {
      				"enabled" : false
      			},
      			"wiredTiger" : {
      				"engineConfig" : {
      					"configString" : "cache_size=512MB"
      				}
      			}
      		},
      		"systemLog" : {
      			"destination" : "file",
      			"logAppend" : true,
      			"path" : "/Users/cailin/Documents/code/mms/data/db/replica/backup_test/backup_test_0/mongodb.log"
      		}
      	},
      	"ok" : 1
      }
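
      The journaling setting can be read back from the same command on each member, matching the parsed output above:

      backup_test:PRIMARY> db.adminCommand({ getCmdLineOpts: 1 }).parsed.storage.journal
      { "enabled" : false }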
      

            Assignee: backlog-server-repl [DO NOT USE] Backlog - Replication Team
            Reporter: Cailin Nelson (cailin.nelson@mongodb.com)
            Votes: 0
            Watchers: 14
