Kafka Connector / KAFKA-96

Source Connector: The resume token UUID does not exist

    • Type: Bug
    • Resolution: Duplicate
    • Priority: Major - P3
    • Fix Version/s: 1.3.0
    • Affects Version/s: None
    • Component/s: Source
    • Labels: None
    • Environment:
      Component: Confluent Kafka Connector
      Version: 1.0.1

      MongoDB version: 3.6.16, sharded cluster

      Related tickets: 

      https://jira.mongodb.org/browse/SERVER-32088

      https://jira.mongodb.org/browse/SERVER-32029

      Issue: 

      Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:179)

      com.mongodb.MongoCommandException: Command failed with error 40615 (Location40615): 'The resume token UUID does not exist. Has the collection been dropped?' on server xxx. The full response is {"ok": 0.0, "errmsg": "The resume token UUID does not exist. Has the collection been dropped?", "code": 40615, "codeName": "Location40615", "operationTime": {"$timestamp": {"t": 1584614769, "i": 95}}, "$clusterTime": {"clusterTime": {"$timestamp": {"t": 1584614769, "i": 95}}, "signature": {"hash": {"$binary": "2BrRm3m276aXzvoFKW+R/TnxNMU=", "$type": "00"}, "keyId": {"$numberLong": "6763316478826514148"}}}}

       

      The error occurs when a change stream opened by the Kafka Connector cannot find the collection on the target server via mongos. This is a known bug that is fixed in MongoDB 4+ (see SERVER-32029), and a backport will not be implemented. Upgrading to MongoDB 4+ is not an option for us at the moment due to time constraints.
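
      For illustration only, a minimal Java driver sketch (not the connector's own code) of resuming a change stream from a stored token, which is roughly where error 40615 surfaces on this cluster; the mongos address, namespace, and token value are placeholders:

      import com.mongodb.MongoCommandException;
      import com.mongodb.client.MongoClient;
      import com.mongodb.client.MongoClients;
      import org.bson.BsonDocument;

      public class ResumeTokenSketch {
          public static void main(String[] args) {
              // Placeholder mongos address; the connector normally builds this client itself.
              try (MongoClient client = MongoClients.create("mongodb://mongos-host:27017")) {
                  // Stand-in for the resume token the connector persisted as its source offset.
                  BsonDocument storedToken = BsonDocument.parse("{\"_data\": \"<stored token>\"}");
                  try {
                      client.getDatabase("mydb").getCollection("mycoll")
                            .watch()
                            .resumeAfter(storedToken)
                            .forEach(change -> System.out.println(change.getFullDocument()));
                  } catch (MongoCommandException e) {
                      // On this 3.6 sharded cluster the resume fails with code 40615
                      // ("The resume token UUID does not exist"), as shown in the log above.
                      if (e.getErrorCode() == 40615) {
                          System.err.println("Stale resume token: " + e.getErrorMessage());
                      } else {
                          throw e;
                      }
                  }
              }
          }
      }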

       

      Questions:

      Is there a workaround for this specific issue? We use tag-based sharding and shard our collections based on this tag. We currently have one active shard, so we know which shard is receiving writes and where the data is located. We thought it might be possible to tail the oplog of the primary of the active shard with the Kafka Connector, receive the data, and write it to Kafka (see the sketch after this question); is this plausible? Or, when dealing with a sharded MongoDB cluster, is the only way to receive data to go through mongos using change streams initiated by the Kafka Connector?
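
      To make the oplog idea concrete, here is a rough Java driver sketch of tailing the oplog of the active shard's primary directly (connection string, replica set name, and namespace are hypothetical); the connector itself does not work this way, so turning raw oplog entries into Kafka records would have to be written by hand:

      import com.mongodb.CursorType;
      import com.mongodb.client.MongoClient;
      import com.mongodb.client.MongoClients;
      import com.mongodb.client.MongoCollection;
      import com.mongodb.client.model.Filters;
      import org.bson.BsonTimestamp;
      import org.bson.Document;

      public class OplogTailSketch {
          public static void main(String[] args) {
              // Hypothetical connection string for the active shard's replica set (not mongos).
              try (MongoClient client = MongoClients.create(
                      "mongodb://shard0-a:27017,shard0-b:27017/?replicaSet=shard0")) {
                  MongoCollection<Document> oplog =
                          client.getDatabase("local").getCollection("oplog.rs");

                  // Start from "now"; a real consumer would persist the last-seen ts and resume from it.
                  BsonTimestamp lastSeen = new BsonTimestamp((int) (System.currentTimeMillis() / 1000), 0);

                  // Tailable await cursor over the oplog, filtered to the sharded namespace.
                  for (Document op : oplog.find(Filters.and(
                                  Filters.gt("ts", lastSeen),
                                  Filters.eq("ns", "mydb.mycoll")))   // hypothetical namespace
                          .cursorType(CursorType.TailableAwait)
                          .noCursorTimeout(true)) {
                      // Each document is a raw oplog entry ("op": "i"/"u"/"d"); turning it into
                      // a Kafka record is the work a change stream normally does for us.
                      System.out.println(op.toJson());
                  }
              }
          }
      }

      Note that reading a single shard's oplog bypasses mongos entirely; entries written by chunk migrations are flagged with fromMigrate: true in the oplog and would need to be skipped to avoid duplicating documents the balancer moves between shards.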

       

      Workflow:

      MongoDB shard/replica set oplog (current active shard) → Kafka Connector → Kafka cluster (see the example configuration below)
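
      Untested, and only a sketch: if pointing the connector's connection.uri at the active shard's replica set instead of mongos is acceptable, the connector would open its change stream against that shard directly rather than through the router. Host names, database, collection, and topic prefix below are placeholders:

      {
        "name": "mongo-source-shard0",
        "config": {
          "connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
          "connection.uri": "mongodb://shard0-a:27017,shard0-b:27017/?replicaSet=shard0",
          "database": "mydb",
          "collection": "mycoll",
          "topic.prefix": "shard0"
        }
      }

      Whether writes performed by chunk migrations are visible on a per-shard change stream, and how resume tokens behave in that setup, would need to be verified before relying on this.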

       

            Assignee: Ross Lawley (ross@mongodb.com)
            Reporter: Pietro Partescano (pietro.partescano@adevinta.com)
            Votes: 0
            Watchers: 2
