Spark Connector / SPARK-150

Add support for Compound Shard Keys.

    • Type: New Feature
    • Resolution: Fixed
    • Priority: Major - P3
    • 2.2.2, 2.1.2
    • Affects Version/s: 1.1.0
    • Component/s: Partitioners
    • Labels: None

      Create a MongoCompoundKeyShardedPartitioner.


      Was: MongoShardedPartitioner does not support Compound shard keys

      MongoShardedPartitioner does not seem to support compound shard keys.
      I am trying to use MongoShardedPartitioner in my Spark job to read data from MongoDB, but I am unable to make it work.

      My compound shard key:

      "key" : { "shardkey_type" : 1, "shardkey_value" : 1 }

      Now, what should I pass as the shardKey partitioner option in ReadConfig?
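      For comparison, this is roughly how a single-field shard key is configured today (the partitionerOptions key name and the uri are assumptions on my part, shown only to illustrate why a compound key does not fit):

      import com.mongodb.spark.MongoSpark
      import com.mongodb.spark.config.ReadConfig

      // sc is an existing SparkContext; the uri is a placeholder.
      // With a compound key there is no single field name to pass here.
      val readConfig = ReadConfig(Map(
        "uri"         -> "mongodb://host:27017/mydb.mycoll",
        "partitioner" -> "MongoShardedPartitioner",
        "partitionerOptions.shardkey" -> "shardkey_type" // single field only
      ))
      val rdd = MongoSpark.load(sc, readConfig)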
      Looking at the source code's generatePartitions method, it expects a single shardKey field and uses it to generate a boundary query, so with a compound key I am unable to make it work. Based on the source code, I believe it does not support compound shard keys:

      private[partitioner] def generatePartitions(chunks: Seq[BsonDocument], shardKey: String,
          shardsMap: Map[String, Seq[String]]): Array[MongoPartition] = {
        chunks.zipWithIndex.map({
          case (chunk: BsonDocument, i: Int) =>
            MongoPartition(
              i,
              PartitionerHelper.createBoundaryQuery(
                shardKey,
                chunk.getDocument("min").get(shardKey),
                chunk.getDocument("max").get(shardKey)
              ),
              shardsMap.getOrElse(chunk.getString("shard").getValue, Nil)
            )
        }).toArray
      }

      def createBoundaryQuery(key: String, lower: BsonValue, upper: BsonValue): BsonDocument =
      new BsonDocument(key, new BsonDocument("$gte", lower).append("$lt", upper))
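
      One possible shape for a compound-key-aware boundary query, purely as a sketch (createCompoundBoundaryQuery is a hypothetical helper, not part of the connector):

      import org.bson.BsonDocument
      import scala.collection.JavaConverters._

      // Hypothetical: builds one $gte/$lt clause per shard key field from a
      // chunk's "min" and "max" documents. Note that ANDing per-field ranges
      // only approximates the chunk's true lexicographic key range, so this
      // illustrates the shape of a solution rather than a complete fix.
      def createCompoundBoundaryQuery(min: BsonDocument, max: BsonDocument): BsonDocument = {
        val query = new BsonDocument()
        min.keySet().asScala.foreach { field =>
          query.append(field, new BsonDocument("$gte", min.get(field)).append("$lt", max.get(field)))
        }
        query
      }

      Called with chunk.getDocument("min") and chunk.getDocument("max"), this would produce a filter such as { "shardkey_type" : { "$gte" : ..., "$lt" : ... }, "shardkey_value" : { "$gte" : ..., "$lt" : ... } }.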

      Please advise and help.

            Assignee:
            Ross Lawley (ross@mongodb.com)
            Reporter:
            Sandeep Maggon (snpmg2)
            Votes:
            0
            Watchers:
            3
