Core Server / SERVER-8189

Unstable connectivity in replica sets environment

    • Type: Bug
    • Resolution: Done
    • Priority: Major - P3
    • Affects Version/s: 2.2.2
    • Component/s: Replication
    • Environment:
      Debian: Linux elba-mongo3 2.6.32-5-amd64 #1 SMP Sun May 6 04:00:17 UTC 2012 x86_64 GNU/Linux
    • Operating System: Linux

      Hello! We have a production setup of three nodes in a replica set: elba-mongo1 on one rack (the primary), and elba-mongo2 and elba-mongo3 on another (the secondaries).
      Periodically we see entries similar to the following in our mongod logs:

      Wed Jan 16 11:59:13 [rsBackgroundSync] replSet syncing to: elba-mongo1:27017
      Wed Jan 16 11:59:16 [rsHealthPoll] DBClientCursor::init call() failed
      Wed Jan 16 11:59:16 [rsHealthPoll] replSet info elba-mongo1:27017 is down (or slow to respond): DBClientBase::findN: transport error: elba-mongo1:27017 ns: admin.$cmd query: { replSetHeartbeat: "rs0", v: 6, pv: 1, checkEmpty: false, from: "elba-mongo3:27017", $auth: {} }
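
      For completeness, member state can still be polled from a node that responds; a minimal check along these lines (elba-mongo2 is just an example host here; see also the attached rsStatus.txt):

      # Poll replica set member state from a node that is still reachable.
      # rs.status() reports each member's state, health and last heartbeat.
      mongo --host elba-mongo2 --eval 'printjson(rs.status())'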

      After this point a new primary is elected. elba-mongo1 can be pinged successfully from both elba-mongo2 and elba-mongo3, but we cannot connect to the mongod instance on elba-mongo1 from the elba-mongo1 machine itself using the mongo shell - it just hangs on "connecting to":

      root@elba-mongo1:~# telnet localhost 27017
      Trying 127.0.0.1...
      Connected to localhost.
      Escape character is '^]'.
      ^]
      telnet> quit
      Connection closed.
      root@elba-mongo1:~# mongo
      MongoDB shell version: 2.2.2
      connecting to: test
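
      To see what mongod is doing while the shell hangs, it can be traced with strace along these lines (a sketch; pidof is assumed to resolve the mongod PID; see also the attached strace.out):

      # Trace the running mongod while the shell connection hangs:
      # -f follows all threads, -tt adds microsecond timestamps.
      strace -f -tt -p "$(pidof mongod)" -o strace.out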

        Attachments:
          1. log (87 kB)
          2. mongo1.log (45 kB)
          3. mongo2.log (134 kB)
          4. mongo3.log (114 kB)
          5. rsStatus.txt (6 kB)
          6. strace.out (138 kB)

            Assignee:
            Tad Marshall (tad)
            Reporter:
            Gusev Petr (gusev_p)
            Votes:
            0
            Watchers:
            5
