Type: Improvement
Resolution: Unresolved
Priority: Unknown
Affects Version/s: None
Component/s: None
Context
Depending on how "large" the large document is, one gets 3 different exceptions. All of the following documents exceed maxBsonObjectSize:
```
large = {"large": "1" * 1024 * 1024 * 16}    # ~maxBsonObjectSize: over the document size limit but under the command size limit
xlarge = {"xlarge": "1" * 1024 * 1024 * 18}  # over the command size limit
xxl = {"xxl": "1" * 1024 * 1024 * 48}        # over max_message_size (48_000_000) <== only this one raises DocumentTooLarge
```
The errors are the following:
```
insert_many([large, dict()]):  type(eDocDict16)=<class 'pymongo.errors.BulkWriteError'>. errmsg: object to insert too large. size in bytes: 16777250, max size: 16777216
insert_many([xlarge, dict()]): type(eDocDict18)=<class 'pymongo.errors.OperationFailure'>. errmsg: BSONObj size: 18874403 (0x1200023) is invalid. Size must be between 0 and 16793600(16MB) First element: _id: ObjectId('65734a221a22a6ca192d74e0')
insert_many([xxl, dict()]):    type(eDocDict48)=<class 'pymongo.errors.DocumentTooLarge'>. errmsg: BSON document too large (50331680 bytes) - the connected server supports BSON document sizes up to 16777216 bytes.
```
Definition of done
- Triage. What would we prefer the behavior to be?
- Make these consistent.
- Create unit test(s)
Pitfalls
What should the implementer watch out for? What are the risks?
Related to:
- PYTHON-1943 PyMongo does not validate bson document size in OP_MSG bulk writes (Backlog)
- PYTHON-1366 About DocumentTooLarge (Backlog)