Core Server / SERVER-11843

Removal of replica-set members doesn't work for a sharded DB when config servers are down

    • Type: Bug
    • Resolution: Done
    • Priority: Critical - P2
    • Affects Version/s: 2.4.6
    • Component/s: Sharding
    • Environment: CentOS (Linux)
      Provided in description summary

      We have a geo-redundant setup for a sharded database with the configuration below (a shell sketch of this topology follows the layout):

      Site-1

      Shard#1 - set01 - (Host1: member#1 - Primary DB, Host2: member#2 - Secondary DB)
      Shard#2 - set02 - (Host3: member#1 - Primary DB, Host4: member#2 - Secondary DB)
      Shard#3 - set03 - (Host5: member#1 - Primary DB, Host6: member#2 - Secondary DB)

      Host7: Config Server1
      Host8: Config Server2
      Host9: Arbiter

      Site-2

      Shard#1 - set01 - (Host1: member#3 - Secondary DB, Host2: member#4 - Secondary DB)
      Shard#2 - set02 - (Host3: member#3 - Secondary DB, Host4: member#4 - Secondary DB)
      Shard#3 - set03 - (Host5: member#3 - Secondary DB, Host6: member#4 - Secondary DB)

      Host7: Config Server3
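
      For reference, a minimal mongo-shell sketch of how one replica-set shard (set01) in this topology might be initiated and registered with the cluster. Hostnames and ports are hypothetical placeholders, not taken from the report, and the arbiter is omitted for brevity:

          // Run against the intended primary of set01 (Host1 in Site-1).
          // All hostnames/ports below are assumed placeholders.
          rs.initiate({
            _id: "set01",
            members: [
              { _id: 0, host: "host1.site1:27017" },  // Site-1, member#1 (primary)
              { _id: 1, host: "host2.site1:27017" },  // Site-1, member#2 (secondary)
              { _id: 2, host: "host1.site2:27017" },  // Site-2, member#3 (secondary)
              { _id: 3, host: "host2.site2:27017" }   // Site-2, member#4 (secondary)
            ]
          })

          // Run against a mongos so the shard is recorded in the
          // config server metadata.
          sh.addShard("set01/host1.site1:27017,host2.site1:27017")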

      Issue:
      When an entire site goes down, we remove all of the failed members of Site-1.
      However, when Site-1 is completely down, both of its config servers are down as well.
      The problem is that the removal of the Site-1 members is not recorded in the config server metadata:
      when we run sh.status(), we still see all of the replica-set members.
      As a workaround, we bring the config servers back up and remove the members again; only then is the metadata updated.
      Is this a limitation or a bug in MongoDB, i.e. must all config servers be up and running in order to remove replica-set members?
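
      For illustration, a minimal shell sketch of the reported sequence and workaround, using the same assumed hostnames as the sketch above (this mirrors the report's steps, not an official procedure):

          // Site-1, including both of its config servers, is down.
          // 1. On the surviving primary of set01 (in Site-2),
          //    remove the dead members:
          rs.remove("host1.site1:27017")
          rs.remove("host2.site1:27017")

          // 2. On a mongos, the cluster metadata is unchanged because
          //    the config servers could not be written to:
          sh.status()                               // still lists the removed members
          db.getSiblingDB("config").shards.find()   // host string for set01 is stale

          // 3. Workaround per the report: restart the config servers and
          //    run the removals again; only then is the metadata updated.
          //    (If the first rs.remove() already succeeded on the replica
          //    set itself, re-running it may error; the report states that
          //    repeating the removal was what refreshed the metadata.)
          rs.remove("host1.site1:27017")
          rs.remove("host2.site1:27017")
          sh.status()                               // now reflects the removal

      For context, in MongoDB 2.4 the three SCCC config servers must all be reachable for cluster metadata writes to succeed (metadata becomes read-only if any one is down), which is consistent with the behaviour described above.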

            Assignee:
            Unassigned
            Reporter:
            Krishnachaitanya Thummuru (kthummur)
            Votes:
            0
            Watchers:
            4
