Core Server / SERVER-3126

replset - common point problem

    • Type: Task
    • Resolution: Done
    • Priority: Critical - P2
    • Affects Version/s: 1.8.1
    • Component/s: Replication
    • Environment:
      as in SERVER-3125

      after changing the oplog size in the config and restarting mongodb (in order: node no 1 - secondary, node no 2 - secondary, node no 3 - primary)

      Set name: testreplset
      Majority up: yes
      Member                   | id | Up | cctime   | Last heartbeat | Votes | Priority | State      | Messages                                                      | optime
      172.17.0.251:27017       | 0  | 1  | 3.3 mins | 1 sec ago      | 1     | 1        | RECOVERING | error RS102 too stale to catch up                             | 4dd6539b:81
      172.17.0.252:27017       | 1  | 1  | 3.3 mins | 1 sec ago      | 1     | 1        | PRIMARY    |                                                               | 4dd6546d:7c
      172.17.0.253:27017 (me)  | 2  | 1  | 3.4 mins |                | 1     | 1        | ROLLBACK   | rollback 2 error findcommonpoint waiting a while before trying again | 4dd65853:8
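
      The RECOVERING member reports RS102 ("too stale to catch up"). As a rough illustration of that condition (this is a sketch, not the server's actual implementation): because the oplog is a capped collection, shrinking it can discard entries a lagging member still needs, and a member whose last applied optime is older than the oldest entry on its sync source can no longer sync incrementally.

      ```python
      # Illustrative sketch (NOT MongoDB's implementation) of the RS102
      # "too stale to catch up" condition: the capped oplog on the sync
      # source no longer contains the member's last applied optime.

      def can_catch_up(source_oplog_optimes, member_last_optime):
          """A member can sync incrementally only if its last applied
          optime is at or after the oldest entry still retained in the
          source's oplog; otherwise a full resync is required."""
          oldest_on_source = min(source_oplog_optimes)
          return member_last_optime >= oldest_on_source

      # Example: after shrinking the oplog, the source retains only ops 100..200.
      source_oplog = range(100, 201)
      print(can_catch_up(source_oplog, 150))  # still inside the window -> True
      print(can_catch_up(source_oplog, 50))   # behind the window -> RS102 -> False
      ```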

      Recent replset log activity:

      Fri May 20 14:15:09 [startReplSets] couldn't connect to localhost:27017: couldn't connect to server localhost:27017
      14:15:09 [startReplSets] replSet STARTUP2
      14:15:09 [replica set sync] replSet SECONDARY
      14:15:09 [rs Manager] replSet can't see a majority, will not try to elect self
      14:15:09 [ReplSetHealthPollTask] replSet info 172.17.0.252:27017 is up
      14:15:09 [ReplSetHealthPollTask] replSet member 172.17.0.252:27017 PRIMARY
      14:15:11 [ReplSetHealthPollTask] replSet info 172.17.0.251:27017 is up
      14:15:11 [ReplSetHealthPollTask] replSet member 172.17.0.251:27017 RECOVERING
      14:15:12 [replica set sync] replSet we are ahead of the primary, will try to roll back
      14:15:12 [replica set sync] replSet rollback 0
      14:15:12 [replica set sync] replSet ROLLBACK
      14:15:12 [replica set sync] replSet rollback 1
      14:15:12 [replica set sync] replSet rollback 2 FindCommonPoint
      14:15:12 [replica set sync] replSet info rollback our last optime: May 20 14:02:27:8
      14:15:12 [replica set sync] replSet info rollback their last optime: May 20 13:45:49:7c
      14:15:12 [replica set sync] replSet info rollback diff in end of log times: 998 seconds
      14:15:12 [replica set sync] replSet rollback error RS101 reached beginning of local oplog
      14:15:12 [replica set sync] replSet them: 172.17.0.252:27017 scanned: 58620
      14:15:12 [replica set sync] replSet theirTime: May 20 13:45:49 4dd6546d:7c
      14:15:12 [replica set sync] replSet ourTime: May 20 13:57:33 4dd6572d:1dd
      14:15:12 [replica set sync] replSet rollback 2 error RS101 reached beginning of local oplog [2]
      14:15:25 [replica set sync] replSet we are ahead of the primary, will try to roll back
      14:15:25 [replica set sync] replSet rollback 0
      14:15:25 [replica set sync] replSet rollback 1
      14:15:25 [replica set sync] replSet rollback 2 FindCommonPoint
      14:15:25 [replica set sync] replSet rollback 2 error findcommonpoint waiting a while before trying again
      14:15:39 .....
      14:15:52 .....
      14:16:05 .....
      14:16:18 [replica set sync] replSet we are ahead of the primary, will try to roll back
      14:16:18 [replica set sync] replSet rollback 0
      14:16:18 [replica set sync] replSet rollback 1
      14:16:18 [replica set sync] replSet rollback 2 FindCommonPoint
      14:16:18 [replica set sync] replSet info rollback our last optime: May 20 14:02:27:8
      14:16:18 [replica set sync] replSet info rollback their last optime: May 20 13:45:49:7c
      14:16:18 [replica set sync] replSet info rollback diff in end of log times: 998 seconds
      14:16:18 [replica set sync] replSet rollback error RS101 reached beginning of local oplog
      14:16:18 [replica set sync] replSet them: 172.17.0.252:27017 scanned: 58620
      14:16:18 [replica set sync] replSet theirTime: May 20 13:45:49 4dd6546d:7c
      14:16:18 [replica set sync] replSet ourTime: May 20 13:57:33 4dd6572d:1dd
      14:16:18 [replica set sync] replSet rollback 2 error RS101 reached beginning of local oplog [2]
      14:16:32 [replica set sync] replSet we are ahead of the primary, will try to roll back
      14:16:32 [replica set sync] replSet rollback 0
      14:16:32 [replica set sync] replSet rollback 1
      14:16:32 [replica set sync] replSet rollback 2 FindCommonPoint
      14:16:32 [replica set sync] replSet rollback 2 error findcommonpoint waiting a while before trying again
      14:16:45 .....
      14:16:58 .....
      14:17:11 .....
      14:17:24 [replica set sync] replSet we are ahead of the primary, will try to roll back
      14:17:24 [replica set sync] replSet rollback 0
      14:17:24 [replica set sync] replSet rollback 1
      14:17:24 [replica set sync] replSet rollback 2 FindCommonPoint
      14:17:24 [replica set sync] replSet info rollback our last optime: May 20 14:02:27:8
      14:17:24 [replica set sync] replSet info rollback their last optime: May 20 13:45:49:7c
      14:17:24 [replica set sync] replSet info rollback diff in end of log times: 998 seconds
      14:17:25 [replica set sync] replSet rollback error RS101 reached beginning of local oplog
      14:17:25 [replica set sync] replSet them: 172.17.0.252:27017 scanned: 58620
      14:17:25 [replica set sync] replSet theirTime: May 20 13:45:49 4dd6546d:7c
      14:17:25 [replica set sync] replSet ourTime: May 20 13:57:33 4dd6572d:1dd
      14:17:25 [replica set sync] replSet rollback 2 error RS101 reached beginning of local oplog [2]
      14:17:38 [replica set sync] replSet we are ahead of the primary, will try to roll back
      14:17:38 [replica set sync] replSet rollback 0
      14:17:38 [replica set sync] replSet rollback 1
      14:17:38 [replica set sync] replSet rollback 2 FindCommonPoint
      14:17:38 [replica set sync] replSet rollback 2 error findcommonpoint waiting a while before trying again
      14:17:51 .....
      14:18:04 .....
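
      The repeating "rollback 2 FindCommonPoint" / "error RS101 reached beginning of local oplog" cycle above corresponds to the rollback step that scans backwards through the local and remote oplogs looking for a shared optime to roll back to. A hedged sketch of that search (an illustration only, not the server's code) shows why exhausting the local oplog without a match produces RS101:

      ```python
      # Hedged sketch of the rollback "find common point" step (rollback 2
      # in the log above). Illustration only, not MongoDB's code: walk our
      # oplog from newest to oldest until an optime also present in the
      # remote oplog is found; running out of local entries first matches
      # "rollback error RS101 reached beginning of local oplog".

      def find_common_point(our_oplog, their_oplog):
          """our_oplog / their_oplog: optimes in ascending order."""
          theirs = set(their_oplog)
          # Scan our oplog newest-first looking for a shared optime.
          for optime in reversed(our_oplog):
              if optime in theirs:
                  return optime  # common point found: roll back to here
          return None            # RS101: no common point in the local oplog

      print(find_common_point([1, 2, 3, 5, 6], [1, 2, 3, 4]))  # 3
      print(find_common_point([5, 6, 7], [1, 2, 3, 4]))        # None -> RS101
      ```

      In this report the two oplogs have diverged past the retained history (the node is 998 seconds ahead of the primary's last optime after scanning all 58620 remote entries), so the search fails and retries forever instead of resolving.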

            Assignee: Gregory McKeon (Inactive) (greg.mckeon@mongodb.com)
            Reporter: MartinS (msz)
            Votes: 0
            Watchers: 0
