Core Server / SERVER-18096

Shard primary incorrectly reuses closed sockets after relinquish and re-election

    • Type: Bug
    • Resolution: Done
    • Priority: Major - P3
    • Fix Version/s: 2.6.10
    • Affects Version/s: 2.6.9
    • Component/s: Networking
    • Labels: None
    • Backwards Compatibility: Fully Compatible
    • Operating System: ALL
    • Sprint: Sharding 3 05/15/15

      When a shard primary relinquishes (steps down), it closes all of its incoming and outgoing connections. This is normal and necessary. However, if it later becomes primary again, it incorrectly tries to reuse the (now closed) outgoing sockets to the configsvrs and to other shard members (via the ReplicaSetMonitorWatcher).

      Since these fds have been closed and are no longer valid, this causes a profusion of "Bad file descriptor" (errno = EBADF) messages in the logfile. However, the connections are not automatically re-established, causing subsequent chunk migrations to fail (and probably other operations that require the shards to write to the configsvrs).

      The actual impact depends on whether the FROM or TO shard has "bounced" (step-down/step-up).

      • FROM shard bounce => next 4 migrations fail
      • TO shard bounce => next 3 migrations fail
      • FROM and TO shard bounce => next 8 migrations fail

      Initially the failures occur early in the migration process, but subsequent migrations fail later in the process, notably after documents have been transferred (causing orphaned documents). In some of these failures, SERVER-17066 means that the resulting orphans cannot be cleaned up by cleanupOrphaned.

      Attached are a jstest reproducer and wrapper script suitable for "git bisect run".

      This only affects 2.6; it has been incidentally fixed in 3.0. Using git bisect shows that commit fbbb0d2a1d845728cd714272199a652573e2f27d (SERVER-15593) fixed the issue. However, that ticket is different and the bulk of the commit is completely unrelated.

      I have confirmed that the following hunk alone is sufficient to fix the problem:

      diff --git a/src/mongo/util/net/sock.cpp b/src/mongo/util/net/sock.cpp
      index 8e9517f..e649a43 100644
      --- a/src/mongo/util/net/sock.cpp
      +++ b/src/mongo/util/net/sock.cpp
      @@ -824,6 +824,12 @@ namespace mongo {
           // isStillConnected() polls the socket at max every Socket::errorPollIntervalSecs to determine
           // if any disconnection-type events have happened on the socket.
           bool Socket::isStillConnected() {
      +        if (_fd == -1) {
      +            // According to the man page, poll will respond with POLLNVAL for invalid or
      +            // unopened descriptors, but it doesn't seem to be properly implemented on
      +            // some platforms - it can return 0 events and 0 for revents. Hence this workaround.
      +            return false;
      +        }
      
               if ( errorPollIntervalSecs < 0 ) return true;
               if ( ! isPollSupported() ) return true; // nothing we can do
      

      Given that this is a very simple fix for a logic bug of moderately high impact, can this please be backported to the v2.6 branch?

        1. shard_primary_relinquish_migrate.js
          4 kB
          Kevin Pulo
        2. shard_primary_relinquish_migrate.sh
          0.6 kB
          Kevin Pulo

            Assignee:
            Kevin Pulo (kevin.pulo@mongodb.com)
            Reporter:
            Kevin Pulo (kevin.pulo@mongodb.com)
            Votes:
            1
            Watchers:
            5
