Core Server / SERVER-27515

Issue with sharding a huge collection (splitVector timeout)

    • Type: Bug
    • Resolution: Duplicate
    • Priority: Major - P3
    • Affects Version/s: 3.2.3
    • Component/s: Sharding
    • Operating System: ALL

      Hi,
      We are facing a timeout problem while sharding quite a big collection.

      sh.shardCollection("mp.AccWiseQty", {"outletId" :1, "variantId":1});
      { "code" : 50, "ok" : 0, "errmsg" : "Operation timed out" }
      

      While inspecting the logs, we found that the splitVector command is timing out.

      Mongos log snapshot (splitVector timeout):
      2016-12-26T11:59:46.912+0530 D ASIO     [NetworkInterfaceASIO-ShardRegistry-0] Failed to execute command: RemoteCommand 3054972 -- target:81-47-mumbai.justdial.com:26200 db:admin expDate:2016-12-26T11:59:46.912+0530 cmd:{ splitVector: "mp.AccWiseQty", keyPattern: { outletId: 1.0, variantId: 1.0 }, min: { outletId: MinKey, variantId: MinKey }, max: { outletId: MaxKey, variantId: MaxKey }, maxChunkSizeBytes: 67108864, maxSplitPoints: 0, maxChunkObjects: 0 } reason: ExceededTimeLimit: Operation timed out
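
      To see whether splitVector itself is slow on the shard (rather than mongos giving up first), the same command from the log above can be run by hand against the primary shard's mongod; a sketch reusing those exact parameters (connect the mongo shell directly to the shard primary, not to mongos):

      db.adminCommand({
          splitVector: "mp.AccWiseQty",
          keyPattern: { outletId: 1, variantId: 1 },
          min: { outletId: MinKey, variantId: MinKey },
          max: { outletId: MaxKey, variantId: MaxKey },
          maxChunkSizeBytes: 67108864,
          maxSplitPoints: 0,
          maxChunkObjects: 0
      })
      // timing this call shows whether the shard-side index scan or the
      // mongos-side command timeout is the limiting factor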
      

      Collection Details:
      Collection DB: mp
      Collection Name: AccWiseQty
      No. of Docs: 85686646
      Collection Size:
      "size" : 33235056950,
      "avgObjSize" : 387,
      "storageSize" : 9376174080

      The collection stats file is also attached for details about the collection.

      We have attached all the logs that might be required, with the log verbosity level set to 2:
      1. Config server replica set status
      2. Collection stats (for which we are getting the error)
      3. Sharding status
      4. Config server slave replication info
      5. mongos error log
      6. Primary shard error log
      7. Collection stats

      We have gone through the following link:
      https://groups.google.com/forum/#!topic/mongodb-user/ozSgkhwPPBQ

      It suggests dumping the data out and re-importing it.
      But re-importing the data is not feasible in our case, as the data volume is huge.
      We need a solution that shards the existing huge collection in place.
      Perhaps increasing some timeout setting would help, but we don't know where to change it.
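
      One documented knob worth mentioning (a sketch, not presented as the fix for this ticket) is the cluster-wide max chunk size stored in the config database; a larger value means fewer split points for splitVector to compute. From a mongo shell connected to mongos:

      db.getSiblingDB("config").settings.save({ _id: "chunksize", value: 128 })  // max chunk size in MB, default 64
      // note: this is a chunk-size setting, not a timeout; whether it avoids the
      // error above depends on the deployment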

        1. ShardingStatus.txt
          3 kB
        2. pimaryShard.log
          497 kB
        3. mongos.log
          896 kB
        4. ConfigServerSlaveInfo.txt
          0.3 kB
        5. ConfigReplicaStatus.txt
          2 kB
        6. config.log
          1.79 MB
        7. Collection Stats
          5 kB

            Assignee:
            kaloian.manassiev@mongodb.com Kaloian Manassiev
            Reporter:
            sawantsuraj91@gmail.com Suraj Sawant
            Votes:
            0
            Watchers:
            6
