Spark Connector / SPARK-388

Datatype overwritten during insertion into MongoDB when using mongo-spark-connector 10.1.0

    • Type: Bug
    • Resolution: Fixed
    • Priority: Major - P3
    • Fix Version/s: 10.2.0
    • Affects Version/s: 10.1.0
    • Component/s: Writes
    • Labels: None
    • Documentation Changes: Not Needed

      1. What would you like to communicate to the user about this feature?
      2. Would you like the user to see examples of the syntax and/or executable code and its output?
      3. Which versions of the driver/connector does this apply to?


      As an end user, I want to use mongo-spark-connector 10.1.0 to write to a MongoDB collection.

      Issue: When the JSON data is read and then written to a MongoDB collection, the connector changes the datatypes of the schema that was inferred when the JSON data was read.

      Specifically, it converts "string" to "int32" and "long" to "int64", which causes issues in further aggregations and when reading the records back from the collection.

      How to reproduce: code snippet (a hedged sketch follows below).
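      A minimal reproduction sketch, assuming a local MongoDB instance at mongodb://localhost:27017, a sample JSON file at /tmp/sample.json, and mongo-spark-connector 10.1.0 (e.g. org.mongodb.spark:mongo-spark-connector_2.12:10.1.0) on the Spark classpath; the database, collection, and file names are hypothetical and not taken from the original report:

      from pyspark.sql import SparkSession

      # Minimal reproduction sketch (assumptions: local mongod, a sample JSON file,
      # and mongo-spark-connector 10.1.0 on the classpath; names are placeholders).
      spark = (
          SparkSession.builder
          .appName("mongo-write-datatype-repro")
          .config("spark.mongodb.write.connection.uri", "mongodb://localhost:27017")
          .getOrCreate()
      )

      # Read JSON; Spark infers the schema (e.g. string and long columns).
      df = spark.read.json("/tmp/sample.json")
      df.printSchema()

      # Write to MongoDB using the 10.x connector's "mongodb" data source.
      (df.write
          .format("mongodb")
          .option("database", "test")
          .option("collection", "repro")
          .mode("append")
          .save())

      # Read the collection back; per the report, the stored datatypes no longer
      # match the schema that was in effect when the JSON was read.
      readback = (
          spark.read
          .format("mongodb")
          .option("database", "test")
          .option("collection", "repro")
          .load()
      )
      readback.printSchema()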

      Expectation:

      The datatypes should not change when writing to the MongoDB collection.
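
      Because Spark may map the stored BSON values back onto its own types on read, one way to check whether this expectation holds is to inspect the BSON types stored in the collection directly, for example with the $type aggregation operator. This is a hedged sketch (not from the original report); the connection string, database, collection, and field names are hypothetical:

      from pymongo import MongoClient

      # Sketch: inspect the BSON types actually stored in MongoDB
      # (assumes pymongo and a local mongod; all names below are placeholders).
      client = MongoClient("mongodb://localhost:27017")
      coll = client["test"]["repro"]

      # The $type aggregation expression returns the stored BSON type name of a
      # field, making an unexpected conversion visible (e.g. "int" where "long"
      # or "string" was expected).
      pipeline = [
          {"$project": {
              "_id": 0,
              "name_type": {"$type": "$name"},
              "count_type": {"$type": "$count"},
          }},
          {"$limit": 5},
      ]
      for doc in coll.aggregate(pipeline):
          print(doc)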

        1. image-2023-02-06-10-13-27-346.png (77 kB)
        2. screenshot-1.png (20 kB)

            Assignee:
            ross@mongodb.com Ross Lawley
            Reporter:
            himanshu.yadav0807@gmail.com deadpool N/A
            Votes:
            1
            Watchers:
            12

              Created:
              Updated:
              Resolved: