Hello
I use a deep-copy library (https://code.google.com/p/cloning/) in my application to speed up the creation of a tremendous number of objects by cloning pre-constructed templates with a deep hierarchy. However, the ObjectId class really doesn't react well to this.
If I create a "new ObjectId()", the flag "_new" is set to "true". If I store, let's say in a HashMap, my Object containing this Id and I try to SAVE() my object it works for the first time. However, the second time I SAVE() my object using your driver, MongoDB tells me that there's a violation of the Index.
Looking at your source code:
/**
 * Saves an object to this collection (does insert or update based on the object _id).
 * @param jo the <code>DBObject</code> to save
 * @param concern the write concern
 * @return
 * @throws MongoException
 */
public WriteResult save( DBObject jo, WriteConcern concern ){
    if ( checkReadOnly( true ) )
        return null;

    _checkObject( jo , false , false );

    Object id = jo.get( "_id" );

    if ( id == null || ( id instanceof ObjectId && ((ObjectId)id).isNew() ) ){
        if ( id != null && id instanceof ObjectId )
            ((ObjectId)id).notNew();
        if ( concern == null )
            return insert( jo );
        else
            return insert( jo, concern );
    }

    DBObject q = new BasicDBObject();
    q.put( "_id" , id );
    if ( concern == null )
        return update( q , jo , true , false );
    else
        return update( q , jo , true , false , concern );
}
It looks like, since the ObjectId is deep-copied (and the copy still reports isNew() == true), the driver always does an insert instead of an update with upsert = true. Shouldn't the save() method always perform an upsert instead of switching between insert and update?
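For illustration, here is the shape of what I mean: a hypothetical always-upsert save(), assembled only from the calls already visible in the source above. This is not a proposed patch, just a sketch:

public WriteResult save( DBObject jo , WriteConcern concern ){
    if ( checkReadOnly( true ) )
        return null;

    _checkObject( jo , false , false );

    Object id = jo.get( "_id" );
    if ( id == null ){
        id = new ObjectId();
        ((ObjectId)id).notNew(); // keep later saves (and deep copies) out of any insert branch
        jo.put( "_id" , id );
    }

    DBObject q = new BasicDBObject( "_id" , id );
    if ( concern == null )
        return update( q , jo , true , false );       // upsert
    return update( q , jo , true , false , concern ); // upsert with write concern
}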
I managed a workaround by calling notNew() on my ObjectId every time I create one, and everything seems to work fine now, with the driver doing an update (upsert: true).
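The workaround looks like this (the only driver calls involved are ObjectId.notNew() and DBCollection.save(); the rest mirrors the reproduction sketch above):

ObjectId id = new ObjectId();
id.notNew();                                // clear the _new flag before the first save
DBObject obj = new BasicDBObject( "_id" , id );
collection.save( obj );                     // takes the update( q , jo , true , false ) path: an upsert
collection.save( cloner.deepClone( obj ) ); // copies inherit _new == false, so they upsert too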
My questions: why do you insert at all if the upsert path works fine? Is there any performance issue I should be aware of when using this trick? Does MongoDB handle upserts poorly compared to "real" inserts?