- Type: Bug
- Resolution: Incomplete
- Priority: Major - P3
- None
- Affects Version/s: None
- Component/s: Internal Client
- ALL
This is a question from one of our customers. Sorry for the late notice; we have a meeting with them at 2pm today.
I had a question (possibly bug report) about the MongoDB C++ driver. We wanted to switch from using:
insert(const string &ns, BSONObj obj, int flags)
to the "batched" version:
insert(const string &ns, const vector<BSONObj> &v, int flags)
for performance reasons.
However, we hit an issue pretty quickly. Under the covers, the version that takes the vector uses a BufBuilder to build up the request to send. In the 2.0.x versions of the API, the growth algorithm for the buffer inside BufBuilder does not grow it in a particularly predictable fashion, and we found ourselves exceeding the buffer's 64MB limit pretty consistently. Consequently, we switched to using BufBuilder directly ourselves, effectively reverse-engineering the growth algorithm to make sure we packed as much into each buffer as possible.
I see the growth algorithm has been changed in 2.1.0 to ensure the buffer's size grows in powers of two which means I can remove the reverse engineering from our code.
However, there seems to be a loophole. The BufBuilder constructor takes an int argument for the buffer's initial size, and there is no maximum-size validation on it. I could therefore construct a buffer larger than 64MB, and as long as my appends never cause it to grow, no error about the size is ever thrown.
Is this loophole intentional or is this a bug?
What are the potential downsides to us exploiting it?