In Spark 2.0, SparkSession is going to become the main entry point for Spark [1]. In that context it no longer makes sense to have MongoRDD as the main entry point for configuring / customising the connector. Instead, the plan is to create a MongoSpark case class and companion object to fulfil this role. This will simplify any future upgrades of the connector.
As an added bonus, it will allow the removal of most of the `java.api` package; only the Java Bean FieldTypes will still be needed for Java users.
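
A rough sketch of what such an entry point could look like is below. The method names, signatures and package placement are illustrative assumptions only, not the final API:

```scala
package com.mongodb.spark  // hypothetical package placement for illustration

import scala.reflect.ClassTag

import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD
import org.bson.Document

/**
 * Sketch only: the methods below are assumptions about what the new
 * MongoSpark entry point might expose, not the final design.
 */
case class MongoSpark(sparkContext: SparkContext, options: Map[String, String]) {

  /** Materialise the configured collection as an RDD of the given type. */
  def toRDD[D: ClassTag](): RDD[D] = ???   // would delegate to MongoRDD internally
}

object MongoSpark {

  /** Default read entry point: use the connector options from the SparkConf. */
  def load(sc: SparkContext): RDD[Document] =
    MongoSpark(sc, sc.getConf.getAll.toMap).toRDD[Document]()

  /** Save helper, so callers never need to construct a MongoRDD directly. */
  def save[D](rdd: RDD[D], options: Map[String, String] = Map.empty): Unit = ???
}
```

With a single companion object like this as the public surface, both Scala and Java callers would go through `MongoSpark.load` / `MongoSpark.save`, which is what makes most of the separate `java.api` wrappers unnecessary.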
[1] http://blog.madhukaraphatak.com/introduction-to-spark-two-part-1/