ISSUE SUMMARY
In MongoDB 3.0 nodes running the WiredTiger storage engine, an integer overflow condition may cause a replica set to lose write availability when the write concern is greater than 1.
Under write-intensive workloads, the oplog of a replica set can grow past its configured size. When this happens, the system attempts to remove up to 20,000 documents from the oplog to shrink it back down. If the total size of those 20,000 documents exceeds 2GB, the removal overflows the 32-bit integer that records the size change.
As a result, the size change is recorded incorrectly and the oplog still appears to exceed its maximum configured size, so the system attempts to delete yet more data from the oplog. In extreme cases this can result in the entire contents of the oplog being deleted.
Regular capped collections can be affected as well, but this is very unlikely: a single truncation pass would need to remove more than 2GB of documents.
USER IMPACT
If this bug is triggered under the conditions described above, replication will cease and the affected replica set will need to be recovered manually.
In the unlikely case that a regular capped collection is affected, the system will remove data from the capped collection at a faster-than-normal pace, possibly emptying the collection completely.
WORKAROUNDS
No workarounds exist for this issue. MongoDB users running or wishing to run with the WiredTiger storage engine must upgrade to 3.0.10 or newer. MongoDB 3.2 is not affected by this bug, so users may also consider upgrading to MongoDB version 3.2.3 or newer.
AFFECTED VERSIONS
Only MongoDB 3.0 users running with the WiredTiger storage engine may be affected by this issue. No other configuration of MongoDB is affected.
FIX VERSION
The fix is included in the 3.0.10 production release. MongoDB 3.2 is not affected.
Original description
In wiredtiger_record_store.cpp, _increaseDataSize is declared to take an int for the size change:
void WiredTigerRecordStore::_increaseDataSize(OperationContext* txn, int amount)
But when called from cappedDeleteAsNeeded_inlock, the amount may overflow a 32-bit int if many large records are being deleted, resulting in (very) inaccurate accounting of the size of an oplog. This can result in the oplog deleter thread deleting everything in the oplog in order to try to get it back down to the configured maximum size, causing replication to cease.
Issue links:
- is duplicated by: SERVER-22717 Sudden (huge) spike in Oplog GB/hour on primary member (Closed)
- is related to: SERVER-19800 DataSizeChange forces an int into a bool (Closed)