  Core Server / SERVER-72028

E11000 duplicate key error collection: <col name> index: _id_ dup key: { _id: "xxxxxx 2022-12-10" }', details={}}

    • Type: Bug
    • Resolution: Done
    • Priority: Major - P3
    • Affects Version/s: 5.0.14
    • Component/s: None

      Hello,
      I have a Spring Boot application (`2.7.3`) that uses the reactive MongoDB driver. The database is `5.0.11-focal` (Docker image).
      The problem is that when I execute a query built like the one below (Kotlin):

       

      fun addRequests(requests: List<RequestCountReport>) =
          template.getCollection(REQUEST_COUNT_COLLECTION)
              .flatMapMany { c ->
                  val updates = requests.map { r ->
                      val time = r.time.date()
                      UpdateOneModel<Document>(
                          BasicDBObject(
                              mapOf(
                                  "_id" to r.scope + " " + time,
                                  "scope" to r.scope,
                                  "time" to time,
                              )
                          ),
                          BasicDBObject(
                              mapOf(
                                  "\$inc" to BasicBSONObject(
                                      r.requests.mapKeys { "requests.${it.key}" }
                                  )
                              )
                          ),
                          UpdateOptions().upsert(true)
                      )
                  }
                  c.bulkWrite(updates).toFlux()
              }
              .then() 

       

      (`RequestCountReport` has the following structure)

      data class RequestCountReport(
          val scope: String,
          val time: Temporal,
          val requests: Map<String, Int>,
      ) 
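
      (`r.time.date()` in `addRequests` is a project-local extension that is not shown in this report. Assuming it truncates the `Temporal` to a UTC calendar date, a hypothetical reconstruction could look like the sketch below: a `LocalDate` stringifies to `2022-12-11` for the `_id`, and the driver's JSR-310 codec stores it as a BSON date at midnight UTC, which matches the captured command further down.)

      import java.time.Instant
      import java.time.LocalDate
      import java.time.ZoneOffset
      import java.time.temporal.Temporal

      // Hypothetical sketch of the date() extension used above: truncate the
      // Temporal to the UTC calendar date.
      fun Temporal.date(): LocalDate =
          Instant.from(this).atZone(ZoneOffset.UTC).toLocalDate()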

       

      The updates built by `addRequests` are translated into a command like the following sent to MongoDB:

       

      {
        "update": "requestCount",
        "ordered": true,
        "txnNumber": 3,
        "$db": "route",
        "$clusterTime": {
          "clusterTime": {
            "$timestamp": {
              "t": 1670768591,
              "i": 1
            }
          },
          "signature": {
            "hash": {
              "$binary": {
                "base64": "AAAAAAAAAAAAAAAAAAAAAAAAAAA=",
                "subType": "00"
              }
            },
            "keyId": 0
          }
        },
        "lsid": {
          "id": {
            "$binary": {
              "base64": "OdYdXMkcQ+CxD2BbLWRsog==",
              "subType": "04"
            }
          }
        },
        "updates": [
          {
            "q": {
              "_id": "admin 2022-12-11",
              "scope": "admin",
              "time": {
                "$date": "2022-12-11T00:00:00Z"
              }
            },
            "u": {
              "$inc": {
                "requests.here,maptile,road,truck,fleet": 187
              }
            },
            "upsert": true
          }
        ]
      }

       

      Executing it sometimes fails with an error like this:

      Write errors: [BulkWriteError{index=0, code=11000, message='E11000 duplicate key error collection: route.requestCount index: _id_ dup key: { _id: "xxxxxx 2022-12-10" }', details={}}]. 
          at com.mongodb.internal.connection.BulkWriteBatchCombiner.getError(BulkWriteBatchCombiner.java:167) ~[mongodb-driver-core-4.6.1.jar:na] 
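
      (For context: a duplicate key error on an upsert can happen when two operations upsert the same not-yet-existing `_id` concurrently; both miss the match, both take the insert path, and the loser fails on the `_id_` index. Below is a minimal sketch of that race using the synchronous driver; the connection string and the `requests.total` field are illustrative, and this is only my reading of the scenario, not a confirmed root cause.)

      import com.mongodb.client.MongoClients
      import com.mongodb.client.model.UpdateOptions
      import org.bson.Document
      import java.util.concurrent.Executors

      fun main() {
          val collection = MongoClients.create("mongodb://localhost:27017")
              .getDatabase("route")
              .getCollection("requestCount")
          val pool = Executors.newFixedThreadPool(2)
          // Two concurrent upserts on the same missing _id: both may find no match
          // and attempt the insert, so one of them can report E11000.
          repeat(2) {
              pool.submit {
                  collection.updateOne(
                      Document("_id", "admin 2022-12-11"),
                      Document("\$inc", Document("requests.total", 187)),
                      UpdateOptions().upsert(true)
                  )
              }
          }
          pool.shutdown()
      }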

      I initially used single-op writes (one per entry) and the error also occurred, but then I could at least retry the failing entry. Now that it is a bulk write, I am not sure how to retry safely (some operations may already have been applied, since the bulk is not in a transaction, I assume). Nevertheless, this looks like a bug to me.
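
      One possible client-side mitigation (only a sketch, assuming the documented semantics of an ordered bulk write): when an ordered bulk write fails, the operations before the failing index have been applied while the failing one and everything after it have not, so on an E11000 the remaining operations can be resubmitted without double-counting any `$inc`. The helper name and retry limit below are made up for illustration.

      import com.mongodb.MongoBulkWriteException
      import com.mongodb.client.model.UpdateOneModel
      import com.mongodb.reactivestreams.client.MongoCollection
      import org.bson.Document
      import reactor.core.publisher.Mono

      // Retry an ordered bulk write from the first failed index when the failure is
      // a duplicate key error (code 11000); earlier operations are not re-applied.
      fun bulkWriteRetryingDuplicateKey(
          collection: MongoCollection<Document>,
          updates: List<UpdateOneModel<Document>>,
          retriesLeft: Int = 3,
      ): Mono<Void> =
          if (updates.isEmpty()) Mono.empty()
          else Mono.from(collection.bulkWrite(updates))
              .then()
              .onErrorResume(MongoBulkWriteException::class.java) { e ->
                  val firstError = e.writeErrors.firstOrNull()
                  if (firstError != null && firstError.code == 11000 && retriesLeft > 0)
                      bulkWriteRetryingDuplicateKey(collection, updates.drop(firstError.index), retriesLeft - 1)
                  else Mono.error<Void>(e)
              }

      In `addRequests`, the `c.bulkWrite(updates).toFlux()` call could then be replaced by `bulkWriteRetryingDuplicateKey(c, updates)`.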

            Assignee: Yuan Fang (yuan.fang@mongodb.com)
            Reporter: Witold Kupś (witkups@gmail.com)
            Votes: 0
            Watchers: 5
