- Type: Bug
- Resolution: Fixed
- Priority: Major - P3
- Affects Version/s: 2.6.0
- Component/s: None
When the Ruby driver is connected to the cluster over SSL and is reading large documents, and the program defines a large number of classes and/or modules, the driver performs poorly.
Test case: https://github.com/p-mongo/tests/blob/master/ssl-perf/ssl_perf.rb
After 100 classes are defined, each including 100 modules, find performance drops by about 50% on my machine against a local single mongod with SSL. Without SSL, performance stays the same.
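The linked ssl_perf.rb contains the actual test case; a minimal sketch of the setup it describes (anonymous classes and modules, names chosen here for illustration) might look like:

```ruby
# Sketch of the setup described above: define 100 modules and 100
# classes, with every class including all 100 modules, to inflate the
# process's total class/module count. (See ssl_perf.rb for the real
# test case.)
modules = Array.new(100) { Module.new }

classes = Array.new(100) do
  Class.new do
    # The block passed to Class.new is a closure, so `modules` from the
    # enclosing scope is visible here.
    modules.each { |m| include m }
  end
end
```

Each class ends up with 100 modules in its ancestor chain, so the program carries 10,000 class-to-module links in total.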
The culprit seems to be OpenSSL socket reads. The reads happen in 16 KB buffers, which means roughly 1000 reads are needed to retrieve a single 15 MB document, and each of those reads appears to allocate a buffer. I am not clear on whether the allocated buffers are 16 KB, matching the amount of data read, or 15 MB, matching the expected document size. Regardless, the time to allocate each buffer seems to be proportional to the number of objects allocated in total, so having a large number of classes/modules defined in the program makes these allocations take a very long time.
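The per-read allocation pattern can be illustrated with plain Ruby IO (this is not the driver's actual code): a read call without an explicit output buffer returns a freshly allocated String each time, so a chunked read of a large document produces one new String per chunk.

```ruby
require 'socket'

# Each readpartial call without an output-buffer argument returns a new
# String object, so reading a 15 MB document in 16 KB chunks this way
# performs ~1000 allocations. A plain socket pair stands in for the
# SSL socket here.
r, w = Socket.pair(:UNIX, :STREAM)
w.write('x' * (16 * 1024 * 2))
w.close

chunk1 = r.readpartial(16 * 1024) # freshly allocated String
chunk2 = r.readpartial(16 * 1024) # another fresh allocation
```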
One possible solution is to have the Ruby driver allocate the buffer, since it knows the required size, and have OpenSSL write into that buffer. This should be relatively straightforward to implement, assuming OpenSSL can write into a provided buffer without allocating its own memory. Another solution is to maintain a fixed buffer in the driver that is reused across documents. Yet another is to increase OpenSSL's internal buffer size, or to make it allocate memory based on the expected read size.
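A sketch of the first idea, assuming the message length is known up front (the helper name is hypothetical; Ruby's IO#readpartial and OpenSSL::SSL::SSLSocket#sysread both accept an optional output-buffer argument that is filled in place):

```ruby
require 'socket'

# Hypothetical sketch: the caller allocates one buffer of the expected
# message size plus a single 16 KB scratch string, and every read fills
# the scratch string in place instead of allocating a new String.
def read_into_buffer(socket, length, buffer)
  buffer.clear
  chunk = String.new(capacity: 16 * 1024)
  while buffer.bytesize < length
    socket.readpartial(16 * 1024, chunk) # reuses `chunk` in place
    buffer << chunk
  end
  buffer
end

# Usage, with a local socket pair standing in for the mongod connection:
r, w = Socket.pair(:UNIX, :STREAM)
size = 100 * 1024
writer = Thread.new { w.write('y' * size); w.close }
buf = String.new(capacity: size)
message = read_into_buffer(r, size, buf)
writer.join
```

Only two strings are allocated per message here, regardless of how many 16 KB reads the transport performs; an SSL variant would pass the scratch string to sysread in the same way.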
Related to:
- PYTHON-413 MemoryError while retrieving large cursors (Closed)
- PYTHON-1513 PyMongo inefficiently reads large messages off the network (Closed)