Type: Bug
Resolution: Fixed
Priority: Major - P3
Affects Version/s: 4.0.0
Component/s: Reactive Streams
Labels: None
Environment: MongoDB server 4.2.3 running on Linux.
I'm not sure whether this problem started with version 4 of the driver. Here goes.
Until now I was using `org.mongodb:mongodb-driver-reactivestreams:1.13.0`. This was my code:
MongoClientSettings.Builder settingsBuilder = MongoClientSettings
        .builder()
        .applyToClusterSettings(clusterSettingsBuilder -> clusterSettingsBuilder
                .hosts(configuration.getAddresses())
                .serverSelectionTimeout(5, TimeUnit.SECONDS))
        .compressorList(Arrays.asList(
                MongoCompressor.createZlibCompressor(),
                MongoCompressor.createSnappyCompressor()))
        .applyToConnectionPoolSettings(connectionPoolSettingsBuilder -> connectionPoolSettingsBuilder
                .maxWaitQueueSize(20_000))
        .applyToSocketSettings(socketSettingsBuilder -> socketSettingsBuilder
                .connectTimeout(5, TimeUnit.SECONDS))
        .writeConcern(WriteConcern.MAJORITY
                .withJournal(Boolean.TRUE)
                .withWTimeout(5, TimeUnit.SECONDS));
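I then create the client from these settings, roughly like this (a minimal sketch; error handling omitted):

import com.mongodb.reactivestreams.client.MongoClient;
import com.mongodb.reactivestreams.client.MongoClients;

// build() produces the MongoClientSettings consumed by the reactive streams client
MongoClient client = MongoClients.create(settingsBuilder.build());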
As you can see, I was using the deprecated `maxWaitQueueSize`; without it my code was throwing exceptions with "Too many operations are already waiting for a connection". Now that the method is gone in version 4, running my code again produces millions of exceptions, but this time for a different reason. What I get every time is this:
Got socket exception on connection [connectionId{localValue:21, serverValue:11252}] to localhost:27017. All connections to localhost:27017 will be closed.
I enabled driver logging and followed this "localValue:21" to see what happens to it. This is the chain of events I see:
Checked out connection [connectionId{localValue:21}] to server localhost:27017
Asynchronously opening pooled connection connectionId{localValue:21} to server ServerId{clusterId=ClusterId{value='5e621d085a1fa329367de3a4', description='null'}, address=localhost:27017}
Pooled connection connectionId{localValue:21} to server ServerId{clusterId=ClusterId{value='5e621d085a1fa329367de3a4', description='null'}, address=localhost:27017} is not yet open
Opened connection [connectionId{localValue:21, serverValue:11252}] to localhost:27017
Pooled connection connectionId{localValue:21, serverValue:11252} to server ServerId{clusterId=ClusterId{value='5e621d085a1fa329367de3a4', description='null'}, address=localhost:27017} is now open
Sending command ...
Execution of command with request id 106 completed successfully in 1.64 ms on connection [connectionId{localValue:21, serverValue:11252}] to server localhost:27017
Checked in connection [connectionId{localValue:21, serverValue:11252}] to server localhost:27017
Closing connection connectionId{localValue:21, serverValue:11252}
Checked out connection [connectionId{localValue:21, serverValue:11252}] to server localhost:27017
Asynchronously opening pooled connection connectionId{localValue:21, serverValue:11252} to server ServerId{clusterId=ClusterId{value='5e621d085a1fa329367de3a4', description='null'}, address=localhost:27017}
Pooled connection connectionId{localValue:21, serverValue:11252} to server ServerId{clusterId=ClusterId{value='5e621d085a1fa329367de3a4', description='null'}, address=localhost:27017} is already open
WARN org.mongodb.driver.connection - Got socket exception on connection [connectionId{localValue:21, serverValue:11252}] to localhost:27017. All connections to localhost:27017 will be closed.
As you can see, closing of the connection is initiated, but the connection is then checked out to be used again. I suspect that is what leads to this issue.
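For reference, the kind of load that reproduces this for me is essentially the following (a sketch, not my real application; the database/collection names, iteration count, and sleep are placeholders, and in my app I pass the settings built above instead of the defaults):

import com.mongodb.reactivestreams.client.MongoClient;
import com.mongodb.reactivestreams.client.MongoClients;
import com.mongodb.reactivestreams.client.MongoCollection;
import org.bson.Document;
import org.reactivestreams.Subscriber;
import org.reactivestreams.Subscription;

public class Repro {
    public static void main(String[] args) throws InterruptedException {
        // Defaults connect to localhost:27017, same as my setup.
        MongoClient client = MongoClients.create();
        MongoCollection<Document> collection =
                client.getDatabase("test").getCollection("docs");
        // Fire a burst of concurrent inserts so the pool is constantly
        // checking connections out and in while others are being closed.
        for (int i = 0; i < 100_000; i++) {
            collection.insertOne(new Document("i", i)).subscribe(new Subscriber<Object>() {
                @Override public void onSubscribe(Subscription s) { s.request(1); }
                @Override public void onNext(Object result) { }
                @Override public void onError(Throwable t) { t.printStackTrace(); }
                @Override public void onComplete() { }
            });
        }
        Thread.sleep(60_000); // keep the JVM alive while the async work runs
        client.close();
    }
}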
It happens every single time. What I did to "fix" it was:
.applyToConnectionPoolSettings(connectionPoolSettingsBuilder -> connectionPoolSettingsBuilder
        .minSize(connectionPoolSize)
        .maxSize(connectionPoolSize))
Pinning the pool size so that connections never expire seems to fix the problem (which supports my suspicion of a race condition somewhere).
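Put together, my configuration under 4.0.0 now looks roughly like this (a sketch; `connectionPoolSize` is a constant I picked for my workload, and `configuration.getAddresses()` is my own helper from the code above):

int connectionPoolSize = 64; // illustrative value, not a recommendation
MongoClientSettings settings = MongoClientSettings
        .builder()
        .applyToClusterSettings(clusterSettingsBuilder -> clusterSettingsBuilder
                .hosts(configuration.getAddresses())
                .serverSelectionTimeout(5, TimeUnit.SECONDS))
        .compressorList(Arrays.asList(
                MongoCompressor.createZlibCompressor(),
                MongoCompressor.createSnappyCompressor()))
        .applyToConnectionPoolSettings(connectionPoolSettingsBuilder -> connectionPoolSettingsBuilder
                .minSize(connectionPoolSize)   // pool never shrinks...
                .maxSize(connectionPoolSize))  // ...so connections are not closed as surplus
        .applyToSocketSettings(socketSettingsBuilder -> socketSettingsBuilder
                .connectTimeout(5, TimeUnit.SECONDS))
        .writeConcern(WriteConcern.MAJORITY
                .withJournal(Boolean.TRUE)
                .withWTimeout(5, TimeUnit.SECONDS))
        .build();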
is duplicated by: JAVA-3616 "java.lang.IllegalStateException: Attempted to decrement the reference count below 0" (Closed)