- Type: Bug
- Resolution: Done
- Priority: Major - P3
- Affects Version/s: 3.0.0
- Component/s: Async, Connection Management
- Environment: Linux, JDK 1.6
When the server is restarted, the NettyStream's channel does not capture socket-closed events, and any pending readers are left hanging.
Was: After a server restart, the MongoDB Async Java Client cannot recover all connections
Hey guys,
I tested MongoDB like this:
- start one MongoDB instance (3.2.9) on a Linux server as the server side
- start 40 MongoDB Java async client (3.3.0) processes across two other Linux servers; each process uses 100 connections
- as the mongostat output shows, there are 4,040 connections in total (each process also holds a monitor connection); so far so good
- restart the mongo server; however, mongostat now shows far fewer than 4,040 connections
Some information that might be helpful:
- I was using 3.0.2 at the very beginning, and back then this problem was much worse than with the current 3.3.0. I saw some issues that fixed connection-pool bugs, so I upgraded.
- I enabled trace logging with log4j. It looks like some connections go down a code path (perhaps on an exception) that forgets to close the connection, release it back into com.mongodb.internal.connection.ConcurrentPool.available, and notify the semaphore permits. One potential place I found is com.mongodb.connection.InternalStreamConnection.readAsync(int, SingleResultCallback<ByteBuf>):
    try {
        stream.readAsync(numBytes, new AsyncCompletionHandler<ByteBuf>() {
            @Override
            public void completed(final ByteBuf buffer) {
                callback.onResult(buffer, null);
            }

            @Override
            public void failed(final Throwable t) {
                close();
                callback.onResult(null, translateReadException(t));
            }
        });
    } catch (Exception e) {
        callback.onResult(null, translateReadException(e));
    }
where the catch block has no close() call.
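If that synchronous failure path really does skip close(), the pooled connection would never be released. A minimal, self-contained sketch of the suspected problem and fix follows; the Stream and AsyncCompletionHandler types here are simplified stand-ins for illustration, not the real driver classes:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class ReadAsyncSketch {
    interface AsyncCompletionHandler<T> {
        void completed(T result);
        void failed(Throwable t);
    }

    // Stand-in for the driver's stream: tracks whether close() was called.
    static class Stream {
        final AtomicBoolean closed = new AtomicBoolean(false);

        void readAsync(int numBytes, AsyncCompletionHandler<byte[]> handler) {
            // Simulate a synchronous failure (e.g. the channel was already
            // torn down by a server restart): the exception is thrown before
            // the handler's failed() callback could ever run, so failed()'s
            // close() never fires.
            throw new IllegalStateException("socket closed");
        }

        void close() {
            closed.set(true);
        }
    }

    // The suspected fix: also call close() in the catch block, so a
    // synchronously thrown exception releases the connection too.
    static void readAsyncGuarded(Stream stream, int numBytes) {
        try {
            stream.readAsync(numBytes, new AsyncCompletionHandler<byte[]>() {
                @Override
                public void completed(byte[] buffer) {
                    // deliver the buffer to the caller
                }

                @Override
                public void failed(Throwable t) {
                    stream.close();  // async failure path already closes
                }
            });
        } catch (Exception e) {
            stream.close();  // without this line, the connection leaks
        }
    }

    public static void main(String[] args) {
        Stream stream = new Stream();
        readAsyncGuarded(stream, 16);
        System.out.println("closed=" + stream.closed.get());  // prints closed=true
    }
}
```

With the close() in place, the connection can be returned to the pool and the semaphore permit released, which would match the mongostat counts after a restart.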
Thanks, guys, for paying attention to this.