  Spark Connector: SPARK-256

Count of a sharded collection is lower than the aggregate count from the mongo shell

    • Type: Bug
    • Resolution: Cannot Reproduce
    • Priority: Major - P3
    • Affects Version/s: 2.2.7
    • Component/s: Partitioners
    • Environment:
      Spark 2.2.0
      MongoDB 3.6.2, deployed as a sharded cluster
      The collection is sharded by hashed _id.
      Spark Connector version: 2.2.7 (the latest at the time of reporting)

      Counting in the Spark application (using the default partitioner, MongoSamplePartitioner):

          val count = spark.read.mongo(defaultConfig).count

      The resulting count, 129800000, is less than the aggregate count from the mongo shell, 133301343.
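      Since the issue was resolved as "Cannot Reproduce", the root cause is unknown. One plausible mechanism for a sample-based partitioner undercounting is that the key ranges derived from sampled boundaries fail to cover the full key space, so documents outside those ranges are never read. The toy Scala sketch below is purely illustrative (the object and method names are hypothetical, not the connector's actual code) and shows how incomplete range coverage produces a short count:

      ```scala
      // Toy model of boundary-based partitioning (NOT the connector's code).
      // A sample-based partitioner derives boundary keys from a sample, then
      // each partition reads one [lo, hi) key range. If the ranges do not
      // cover the whole key space, documents are silently skipped and the
      // summed per-partition count falls short of the true collection size.
      object PartitionCountSketch {
        // Turn an ordered boundary list into consecutive [lo, hi) ranges.
        def partitionRanges(boundaries: Seq[Int]): Seq[(Int, Int)] =
          boundaries.sliding(2).collect { case Seq(lo, hi) => (lo, hi) }.toSeq

        // Count documents by summing the count of each partition's range,
        // mimicking a distributed count over range-partitioned reads.
        def countViaPartitions(keys: Seq[Int], ranges: Seq[(Int, Int)]): Int =
          ranges.map { case (lo, hi) => keys.count(k => k >= lo && k < hi) }.sum

        def main(args: Array[String]): Unit = {
          val keys = 0 until 1000 // hypothetical key space of 1000 documents

          // Boundaries covering the full key space: count matches.
          val full = partitionRanges(Seq(0, 250, 500, 750, 1000))
          println(countViaPartitions(keys, full)) // 1000

          // Boundaries missing the tails of the key space: undercount.
          val gappy = partitionRanges(Seq(100, 400, 900))
          println(countViaPartitions(keys, gappy)) // 800
        }
      }
      ```

      This is only one hypothesis; orphaned documents and in-flight chunk migrations on a sharded cluster can also make different counting methods disagree.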

            Assignee: Ross Lawley (ross@mongodb.com)
            Reporter: ye ye (pokerwu.work@gmail.com)
            Votes: 0
            Watchers: 3
