Type: Bug
Resolution: Fixed
Priority: Major - P3
Affects Version/s: 2.12.0.rc0
Component/s: Wire Protocol
Environment: Tested against MongoDB 3.6, but likely affects later versions as well.
Tested on versions 2.7.1 and 2.12.0.rc0 of the driver.
Similar to this Python driver issue: https://jira.mongodb.org/browse/PYTHON-2055
When the zlib compressor is enabled and a large bulk write operation is performed, the operation fails with `Mongo::Error::SocketError: EOFError: end of file reached`. In the server logs, we can see:
2020-04-21T09:02:01.002+0000 I NETWORK [conn330449] DBException handling request, closing client connection: BadValue: Decompressed message would be larger than maximum message size
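For context on why this slips past the network layer: a repetitive payload compresses extremely well, so the compressed wire message stays small even though the decompressed message exceeds the server's limit. A quick illustration (plain Ruby, not driver code):

require 'zlib'

# A run of identical bytes compresses to almost nothing, so the wire
# message is tiny while the decompressed message is ~49 MB.
payload = '*' * 49_000_000
compressed = Zlib.deflate(payload)
puts payload.bytesize     # 49000000
puts compressed.bytesize  # roughly 48 KB -- well under any wire-size limit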
To reproduce, use something like this:
op = { :update_one => { filter: { _id: 'test' }, update: { test: true, data: ('*' * 1000 * 1000) } } }
ops = [op] * 48
ops.to_bson.length           # => 48004027
collection.bulk_write(ops)   # Mongo::Error::SocketError: EOFError: end of file reached
Connection options:
{"database"=>"...", "auth_source"=>"...", "retry_reads"=>true, "retry_writes"=>true, "user"=>"...", "password"=>"...", "write"=>{"w"=>1}, "read"=>{"mode"=>:primary}, "connect_timeout"=>1.5, "socket_timeout"=>60, "ssl"=>true, "ssl_verify"=>false, "compressors"=>["zlib"], "platform"=>"mongoid-6.4.1"}
Unlike in the Python issue, the Ruby driver doesn't appear to check the uncompressed message size at all, so any bulk write larger than 48 MB fails.
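A fix would need to validate the serialized message against the server's advertised maxMessageSizeBytes (48000000 by default) before compression is applied. A minimal standalone sketch of that check; the names check_message_size! and MAX_MESSAGE_SIZE are hypothetical, not the driver's actual internals:

require 'zlib'

# Default maxMessageSizeBytes advertised by the server in the handshake.
MAX_MESSAGE_SIZE = 48_000_000

# The server enforces the limit on the DECOMPRESSED message, so the check
# must run on the uncompressed buffer before compression is applied.
def check_message_size!(buffer, max_size = MAX_MESSAGE_SIZE)
  if buffer.bytesize > max_size
    raise ArgumentError,
          "message of #{buffer.bytesize} bytes exceeds maxMessageSizeBytes (#{max_size})"
  end
  buffer
end

payload = '*' * 49_000_000
Zlib.deflate(check_message_size!(payload)) # raises instead of sending a doomed message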
When compression is disabled, the bulk write seems to be split correctly into smaller chunks.
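For comparison, here is a rough sketch of the kind of size-based splitting the uncompressed path performs; split_into_batches and the 48_000_000 default are illustrative only, and the driver's real logic lives in its operation layer and differs in detail:

require 'bson'

# Hypothetical sketch of size-based batch splitting.
def split_into_batches(ops, max_message_size = 48_000_000)
  batches = [[]]
  batch_size = 0
  ops.each do |op|
    op_size = op.to_bson.length
    # Start a new batch once adding this op would exceed the limit.
    if batch_size + op_size > max_message_size && !batches.last.empty?
      batches << []
      batch_size = 0
    end
    batches.last << op
    batch_size += op_size
  end
  batches
end

op = { update_one: { filter: { _id: 'test' }, update: { data: '*' * 1_000_000 } } }
split_into_batches([op] * 48).length # => 2 batches, each under the limit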