- Type: Bug
- Resolution: Won't Fix
- Priority: Major - P3
- Affects Version/s: None
- Component/s: None
- Assigned Teams: Storage Execution
- Operating System: ALL
When we do batch inserts, we attempt the whole batch at once until we hit an error; if we do, we fall back to inserting each document in the batch one by one, which is slow. In the presence of errors (e.g. an insert that would violate a unique index constraint), the larger the batch size, the more documents have to be written one at a time, and the more work has to be re-done.
We could alleviate this somewhat with a more sophisticated error-handling strategy. For example, if we got an error during a batch, we could first retry, as one batch, all the documents before the failure (they didn't themselves fail, but were rolled back along with the batch), then try another batch with all the documents after the failed document, as sketched below.
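A minimal C++ sketch of the two strategies, assuming an all-or-nothing batch insert that rolls back the whole batch and reports the index of the first failing document. `insertBatch`, `wouldFail`, `insertWithOneByOneFallback`, and `insertWithSplitRetry` are hypothetical names for illustration, not the server's actual API:

```cpp
#include <cstddef>
#include <functional>
#include <iostream>
#include <optional>
#include <span>
#include <string>
#include <vector>

using Document = std::string;
using FailurePredicate = std::function<bool(const Document&)>;

// Stand-in for the storage layer's all-or-nothing batch insert: commits
// every document or, on the first failure, rolls the whole batch back
// and returns the index of the failing document.
std::optional<std::size_t> insertBatch(std::span<const Document> batch,
                                       const FailurePredicate& wouldFail) {
    for (std::size_t i = 0; i < batch.size(); ++i) {
        if (wouldFail(batch[i]))
            return i;  // entire batch rolls back
    }
    // ... commit all documents here ...
    return std::nullopt;
}

// Current behavior (simplified): on any batch failure, redo the work by
// inserting every document individually. Returns the number of insert
// attempts made against the storage layer.
std::size_t insertWithOneByOneFallback(std::span<const Document> docs,
                                       const FailurePredicate& wouldFail) {
    if (!insertBatch(docs, wouldFail))
        return 1;                 // the whole batch committed in one attempt
    std::size_t attempts = 1;     // the failed batch attempt
    for (const Document& d : docs) {
        insertBatch(std::span<const Document>(&d, 1), wouldFail);
        ++attempts;               // one single-document write per document
    }
    return attempts;
}

// Proposed strategy: on a failure at index k, retry the documents before
// k (which didn't fail, but were rolled back) as one batch, then handle
// the documents after k the same way, recursively.
std::size_t insertWithSplitRetry(std::span<const Document> docs,
                                 const FailurePredicate& wouldFail) {
    if (docs.empty())
        return 0;
    auto firstFailure = insertBatch(docs, wouldFail);
    if (!firstFailure)
        return 1;                 // whole batch committed
    std::size_t k = *firstFailure;
    // Skip the failed document; split around it and recurse on each half.
    return 1 + insertWithSplitRetry(docs.subspan(0, k), wouldFail)
             + insertWithSplitRetry(docs.subspan(k + 1), wouldFail);
}

int main() {
    // Pretend "dup" violates a unique index constraint.
    FailurePredicate wouldFail = [](const Document& d) { return d == "dup"; };
    std::vector<Document> docs = {"a", "b", "dup", "c", "d", "dup", "e"};

    std::cout << "fallback attempts:    "
              << insertWithOneByOneFallback(docs, wouldFail) << "\n";  // 8
    std::cout << "split-retry attempts: "
              << insertWithSplitRetry(docs, wouldFail) << "\n";        // 5
}
```

Under these assumptions, the number of insert attempts grows with the number of failing documents rather than with the batch size: a single duplicate in a batch of 1000 documents costs three batch attempts instead of 1001.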