- Type: Bug
- Resolution: Done
- Priority: Major - P3
- Affects Version/s: 2.0.1
- Component/s: Sharding
- Environment: SunOS 5.11 oi_148 i86pc i386 i86pc, 64 GB RAM (Solaris)
We have occasionally seen this problem before, but it became reproducible once we started sharding a collection of about 120 GB: with the balancer active, the mongod instances (master + 2 slaves) on the first shard begin to segfault every few minutes after some time. After disabling the balancer, the mongod instances keep running without crashing.
This happens with 2.0.1 as well as 2.0.2-rc1. My impression is that high load triggers these segfaults.
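For reference, the balancer can be disabled from a mongo shell connected to a mongos; in the 2.0 series this is done by setting the "stopped" flag in the config database (a sketch of the standard procedure, not the exact commands used here):

```javascript
// From a mongo shell connected to a mongos:
use config
// Upsert the balancer settings document with stopped: true.
// The third argument (true) enables upsert in the 2.0-era shell.
db.settings.update({ _id: "balancer" }, { $set: { stopped: true } }, true)

// Verify the current balancer state:
db.settings.findOne({ _id: "balancer" })
```

Setting `stopped: false` the same way re-enables balancing once the crashes are understood.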
Sun Dec 4 17:20:37 Invalid access at address: 0xfffffd59fedb32cc
Sun Dec 4 17:20:38 Got signal: 11 (Segmentation Fault).
Sun Dec 4 17:20:38 Backtrace:
Logstream::get called in uninitialized state
Sun Dec 4 17:20:41 [conn49] end connection 192.168.151.20:39137
Logstream::get called in uninitialized state
Sun Dec 4 17:20:41 [initandlisten] connection accepted from 192.168.151.20:37360 #50
Logstream::get called in uninitialized state
Sun Dec 4 17:20:41 ERROR: Client::~Client _context should be null but is not; client:rsSync
Logstream::get called in uninitialized state
Sun Dec 4 17:20:41 ERROR: Client::shutdown not called: rsSync
- depends on: SERVER-4350 Segmentation fault on replica recovery (Closed)