The spec says that drivers should only consider the "body" when sizing batches. For example, for {delete: 'db.collection', deletes: [{q: {query: 1}, limit: 0}]}, only the size of {query: 1} counts toward the size of the batch. The problem is that if you put 1000 deletes (the max ops per batch) in a single batch, the overhead that does not count toward the max batch size adds up to more than the difference between 16MB and BSONObjMaxInternalSize. This leads to an assert on the server for what the spec defines as a valid operation. The assert fires in a place where the server decides there is irreparable network corruption and closes the connection.
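A rough back-of-the-envelope sketch of why the uncounted overhead overruns the internal headroom. It assumes maxBsonObjectSize is 16 MiB and BSONObjMaxInternalSize is 16 MiB + 16 KiB, and the ~26-byte per-op figure is only an illustrative estimate of the q/limit wrapper, array index key, and document framing, not a measured value:

    # Sketch only; the constants below are assumptions for illustration.
    MAX_BSON_USER_SIZE = 16 * 1024 * 1024                     # 16 MiB, what drivers size batches against
    MAX_BSON_INTERNAL_SIZE = MAX_BSON_USER_SIZE + 16 * 1024   # assumed internal headroom: 16 KiB
    MAX_WRITE_BATCH_SIZE = 1000                                # max ops per write command batch

    # Hypothetical per-op wrapper cost for {q: <filter>, limit: 0}: subdocument
    # length prefix, "q" and "limit" element names, int32 limit value, the array
    # index key, and terminators. Roughly 26 bytes, none of it counted by the spec.
    EST_OVERHEAD_PER_OP = 26

    headroom = MAX_BSON_INTERNAL_SIZE - MAX_BSON_USER_SIZE         # 16384 bytes
    total_overhead = MAX_WRITE_BATCH_SIZE * EST_OVERHEAD_PER_OP    # ~26000 bytes

    print(f"internal headroom: {headroom} bytes")
    print(f"uncounted overhead for a full batch: {total_overhead} bytes")
    print("overhead exceeds headroom:", total_overhead > headroom)

Under these assumptions a full 1000-op batch carries roughly 26 KB of uncounted overhead, well past the 16 KB of internal headroom.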
I spoke with greg_10gen and we think the best solution would be to change the write-commands spec to say that either the entire command object, including all overhead, must fit under 16MB, or the operations array must contain only a single op.
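A minimal sketch of how a driver might split ops under the proposed rule: a batch is sent only if the fully serialized command, overhead included, stays under 16 MiB, or if it contains exactly one op. The function name and re-encoding the whole command per op are assumptions for illustration; a real driver would track the encoded size incrementally:

    import bson  # PyMongo's bson package, used here only to measure encoded size

    MAX_MESSAGE_SIZE = 16 * 1024 * 1024  # proposed limit on the whole command document
    MAX_WRITE_BATCH_SIZE = 1000          # max ops per batch

    def split_into_batches(collection_name, ops):
        """Yield lists of delete ops whose full command document fits the proposed rule."""
        batch = []
        for op in ops:
            candidate = batch + [op]
            command = {"delete": collection_name, "deletes": candidate}
            size = len(bson.encode(command))  # whole command size, overhead included
            if (size > MAX_MESSAGE_SIZE or len(candidate) > MAX_WRITE_BATCH_SIZE) and batch:
                yield batch        # current batch is as large as it can get
                batch = [op]
            else:
                batch = candidate
        if batch:
            yield batch  # a single op that exceeds the limit still goes out alone

Note that a batch holding a single op is always allowed to exceed the limit, which is the second half of the proposed rule.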
Related to: SERVER-13180 upgradeCheck hits BSONObj limit when trying to check indexkey size (Closed)