Core Server / SERVER-29603

Dropping indexes sometimes locks our entire cluster

    • Type: Bug
    • Resolution: Incomplete
    • Priority: Major - P3
    • Affects Version/s: 3.2.13, 3.4.4
    • Component/s: WiredTiger
    • Environment:
      3.4.4 sharded cluster with 18 shards, each a replica set with 1 primary, 1 secondary, and 1 hidden member. 3 config servers (CSRS) and 5 mongos routers.
    • Operating System: ALL
    • Steps to Reproduce:
      1. Drop indexes
      2. See if reads/writes are blocked

      We have run into a situation where we have the same collection on many shards, but the definition of one of the indexes on this collection varies from shard to shard: on one shard it is sparse, and on another it is not.
      While this was our fault, this state prevents these collections from being balanced, because the balancer throws this error:
      "failed to create index before migrating data. error: IndexOptionsConflict: Index with name: eIds.tId_1_eIds.v_-1 already exists with different options"
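      One way to find which shards disagree before the balancer does is to compare the specs that db.collection.getIndexes() returns on each shard. Below is a minimal sketch of that comparison over plain dictionaries; the sample documents (database/collection names, the sparse flag) are hypothetical, and the "v"/"ns" fields are treated as per-shard metadata rather than options.

      ```python
      # Fields in a getIndexes() spec that can legitimately differ between
      # shards and should not count as an options conflict.
      IGNORED_FIELDS = {"v", "ns"}

      def index_options_conflict(spec_a, spec_b):
          """True if two specs share an index name but differ in key or options."""
          if spec_a["name"] != spec_b["name"]:
              return False
          strip = lambda spec: {k: v for k, v in spec.items()
                                if k not in IGNORED_FIELDS}
          return strip(spec_a) != strip(spec_b)

      # Hypothetical specs: one shard built the index as sparse, one did not.
      shard1_spec = {"v": 1, "name": "eIds.tId_1_eIds.v_-1",
                     "key": {"eIds.tId": 1, "eIds.v": -1},
                     "ns": "mydb.mycoll", "sparse": True}
      shard2_spec = {"v": 1, "name": "eIds.tId_1_eIds.v_-1",
                     "key": {"eIds.tId": 1, "eIds.v": -1},
                     "ns": "mydb.mycoll"}

      print(index_options_conflict(shard1_spec, shard2_spec))  # True: sparse differs
      ```

      Running this check per index name across the getIndexes() output of every shard would surface exactly the mismatch the balancer is complaining about.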

      We are trying to remedy this by dropping the index entirely and recreating it in the background (as a side note, why on earth do indexes build in the foreground by default?).
      This does work; however, we have to repair something like 100 collections, and twice now, within the first 5 drops, one of these index drops has completely locked up our system.
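      The repair step above can be sketched as the two database commands one would pass to db.runCommand(): a dropIndexes followed by a createIndexes with background: true. The helper below just builds those command documents; the collection name and the choice to make the rebuilt index sparse everywhere are assumptions for illustration.

      ```python
      def rebuild_index_commands(coll_name, index_name, key, background=True,
                                 **options):
          """Return (drop_cmd, create_cmd) documents for rebuilding one index."""
          # dropIndexes takes the collection name and the index name to drop.
          drop_cmd = {"dropIndexes": coll_name, "index": index_name}
          # createIndexes takes an array of index definitions; background: true
          # avoids the foreground build blocking the database.
          index_doc = {"key": key, "name": index_name, "background": background}
          index_doc.update(options)  # e.g. sparse=True, so every shard agrees
          create_cmd = {"createIndexes": coll_name, "indexes": [index_doc]}
          return drop_cmd, create_cmd

      drop_cmd, create_cmd = rebuild_index_commands(
          "mycoll", "eIds.tId_1_eIds.v_-1",
          key={"eIds.tId": 1, "eIds.v": -1}, sparse=True)
      ```

      Sending the same create_cmd to every shard (or through mongos) is what makes the definitions consistent again, which is the state the balancer requires.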

      I will try to reproduce this locally without the full production environment in an effort to see if this problem is tied to the indexes not matching between shards.

            Assignee:
            Kelsey Schubert (kelsey.schubert@mongodb.com)
            Reporter:
            Scott Glajch (glajchs)
            Votes:
            0
            Watchers:
            6
