Spark Connector / SPARK-175

Test suite fails when using Java driver 3.6.3 with Spark Connector 2.2.1

    • Type: Bug
    • Resolution: Fixed
    • Priority: Major - P3
    • Fix Version/s: 2.2.2
    • Affects Version/s: None
    • Component/s: None
    • Labels: None

      [info] - should round trip all bson types
      [info] - should be able to cast all types to a string value *** FAILED ***
      [info]   org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 74.0 failed 1 times, most recent failure: Lost task 0.0 in stage 74.0 (TID 83, localhost, executor driver): org.bson.BsonInvalidOperationException: Invalid state INITIAL
      [info] 	at org.bson.json.StrictCharacterStreamJsonWriter.checkPreconditions(StrictCharacterStreamJsonWriter.java:352)
      [info] 	at org.bson.json.StrictCharacterStreamJsonWriter.writeNull(StrictCharacterStreamJsonWriter.java:183)
      [info] 	at org.bson.json.JsonNullConverter.convert(JsonNullConverter.java:25)
      [info] 	at org.bson.json.JsonNullConverter.convert(JsonNullConverter.java:22)
      [info] 	at org.bson.json.JsonWriter.doWriteNull(JsonWriter.java:204)
      [info] 	at org.bson.AbstractBsonWriter.writeNull(AbstractBsonWriter.java:557)
      [info] 	at org.bson.codecs.BsonNullCodec.encode(BsonNullCodec.java:38)
      [info] 	at org.bson.codecs.BsonNullCodec.encode(BsonNullCodec.java:28)
      [info] 	at org.bson.codecs.EncoderContext.encodeWithChildContext(EncoderContext.java:91)
      [info] 	at org.bson.codecs.BsonValueCodec.encode(BsonValueCodec.java:62)
      [info] 	at com.mongodb.spark.sql.BsonValueToJson$.apply(BsonValueToJson.scala:29)
      [info] 	at com.mongodb.spark.sql.MapFunctions$.bsonValueToString(MapFunctions.scala:103)
      [info] 	at com.mongodb.spark.sql.MapFunctions$.com$mongodb$spark$sql$MapFunctions$$convertToDataType(MapFunctions.scala:78)
      [info] 	at com.mongodb.spark.sql.MapFunctions$$anonfun$3.apply(MapFunctions.scala:39)
      [info] 	at com.mongodb.spark.sql.MapFunctions$$anonfun$3.apply(MapFunctions.scala:37)
      [info] 	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
      [info] 	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
      [info] 	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
      [info] 	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
      [info] 	at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
      [info] 	at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:186)
      [info] 	at com.mongodb.spark.sql.MapFunctions$.documentToRow(MapFunctions.scala:37)
      [info] 	at com.mongodb.spark.sql.MongoRelation$$anonfun$buildScan$2.apply(MongoRelation.scala:45)
      [info] 	at com.mongodb.spark.sql.MongoRelation$$anonfun$buildScan$2.apply(MongoRelation.scala:45)
      [info] 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:410)
      [info] 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:410)
      [info] 	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
      [info] 	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
      [info] 	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$9$$anon$1.hasNext(WholeStageCodegenExec.scala:461)
      [info] 	at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:253)
      [info] 	at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:247)
      [info] 	at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:828)
      [info] 	at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:828)
      [info] 	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
      [info] 	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
      [info] 	at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
      [info] 	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
      [info] 	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
      [info] 	at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
      [info] 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
      [info] 	at org.apache.spark.scheduler.Task.run(Task.scala:109)
      [info] 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
      [info] 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
      [info] 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
      [info] 	at java.lang.Thread.run(Thread.java:748)
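
      Judging by the stack trace, the failure is in the connector's string-cast path:
      com.mongodb.spark.sql.BsonValueToJson encodes a bare BsonValue (here a BsonNull)
      straight into the driver's JsonWriter, and the precondition checks in the 3.6.x
      driver's StrictCharacterStreamJsonWriter appear to reject a top-level value
      written in the INITIAL state. A minimal sketch that reproduces the exception
      outside Spark, assuming only mongo-java-driver 3.6.3 on the classpath (the
      object name is illustrative):

      import java.io.StringWriter
      import org.bson.BsonNull
      import org.bson.codecs.{BsonValueCodec, EncoderContext}
      import org.bson.json.JsonWriter

      object Spark175Repro {
        def main(args: Array[String]): Unit = {
          // Mirror what com.mongodb.spark.sql.BsonValueToJson does when a BSON
          // value is cast to a string: encode the bare value straight into a
          // JsonWriter with no enclosing document.
          val jsonWriter = new JsonWriter(new StringWriter())
          val codec = new BsonValueCodec()
          // With mongo-java-driver 3.6.x this line throws
          // org.bson.BsonInvalidOperationException: Invalid state INITIAL;
          // the stricter JSON writer no longer accepts a top-level null.
          codec.encode(jsonWriter, BsonNull.VALUE, EncoderContext.builder().build())
        }
      }

      Presumably earlier driver versions let the top-level write through, which
      would explain why the suite only starts failing once the driver is bumped
      to 3.6.3.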
      

            Assignee: Ross Lawley (ross@mongodb.com)
            Reporter: Ross Lawley (ross@mongodb.com)
            Votes: 0
            Watchers: 1

              Created:
              Updated:
              Resolved: