- Type: Build Failure
- Resolution: Unresolved
- Priority: Major - P3
- Affects Version/s: None
- Component/s: Cache and Eviction, Storage Engines
- StorEng - Refinement Pipeline
This is a suggested change related to a BF I recently worked on. Essentially, WiredTiger allows the application to insert data larger than the cache itself, and therefore larger than the configured dirty trigger. Doing so in the right context can leave the cache stuck and crash WiredTiger. This is primarily because, at this stage, uncommitted content is un-evictable, so every application thread gets stuck in eviction trying to evict it, including of course the thread that inserted the value. A minimal repro is sketched after the scenario list below.
There are a few different scenarios to consider:
- The application thread that did the insert is also the oldest transaction in the system. In this case the next operation it performs will receive a rollback, unless that operation is a rollback or commit.
- The application thread isn't the oldest transaction in the system. In this case, if any of the earlier transactions haven't performed a write, they cannot be rolled back and the application will crash.
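Purely for illustration, here's a minimal repro sketch of the shape of the problem. It assumes a standard WiredTiger build; the home directory, table name, and value size are arbitrary, and most error handling is omitted:

```c
/*
 * Repro sketch: open WiredTiger with its minimum 1MB cache, then insert a
 * single 10MB value, i.e. a value larger than the whole cache. The
 * uncommitted update is un-evictable, so threads can end up stuck in
 * eviction. The "WT_HOME" directory must already exist.
 */
#include <stdlib.h>
#include <string.h>
#include <wiredtiger.h>

int
main(void)
{
    WT_CONNECTION *conn;
    WT_CURSOR *cursor;
    WT_SESSION *session;
    size_t len = 10 * 1024 * 1024;
    char *big_value;
    int ret;

    wiredtiger_open("WT_HOME", NULL, "create,cache_size=1MB", &conn);
    conn->open_session(conn, NULL, NULL, &session);
    session->create(session, "table:repro", "key_format=S,value_format=S");
    session->open_cursor(session, "table:repro", NULL, NULL, &cursor);

    /* Build a value larger than the cache (and hence the dirty trigger). */
    big_value = malloc(len);
    memset(big_value, 'a', len - 1);
    big_value[len - 1] = '\0';

    session->begin_transaction(session, NULL);
    cursor->set_key(cursor, "key");
    cursor->set_value(cursor, big_value);
    ret = cursor->insert(cursor);

    /*
     * Scenario 1 above: if this thread is the oldest transaction, the
     * operation fails with WT_ROLLBACK and the transaction must be rolled
     * back. In scenario 2 there may be no recovery at all.
     */
    if (ret == WT_ROLLBACK)
        session->rollback_transaction(session, NULL);
    else
        session->commit_transaction(session, NULL);

    free(big_value);
    conn->close(conn, NULL);
    return (0);
}
```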
The suggestion is to add a guardrail to WiredTiger. This would prevent MongoDB users from ending up in this situation, and we'd also avoid future test failures along the lines of the linked BF. Capping insert sizes at an existing threshold like the dirty_trigger could be weird, so I'd be open to other ideas, such as a separately configurable insert size limit; a rough sketch of that idea follows.
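To make the idea concrete, here's a standalone sketch of the kind of check I have in mind. The insert_size_max knob and insert_size_check function are invented for illustration; nothing like them exists in WiredTiger today:

```c
#include <errno.h>
#include <stddef.h>
#include <stdio.h>

/*
 * Hypothetical guardrail: reject an insert up front when the value is
 * larger than a configured maximum, instead of letting it fill the cache
 * with un-evictable uncommitted content. Both names below are invented.
 */

/* Invented knob: maximum single-insert size in bytes; 0 disables the cap. */
static size_t insert_size_max = 0;

/* Return 0 if the insert is allowed, EINVAL if it exceeds the cap. */
static int
insert_size_check(size_t value_size)
{
    if (insert_size_max != 0 && value_size > insert_size_max)
        return (EINVAL);
    return (0);
}

int
main(void)
{
    /* E.g. cap single inserts at 100KB, well below a 1MB cache. */
    insert_size_max = 100 * 1024;

    printf("100KB insert: %s\n",
      insert_size_check(100 * 1024) == 0 ? "allowed" : "rejected");
    printf("10MB insert: %s\n",
      insert_size_check(10 * 1024 * 1024) == 0 ? "allowed" : "rejected");
    return (0);
}
```

Returning an error from the insert path keeps the oversized value out of the cache entirely, whereas deriving the cap from the dirty_trigger ties a per-operation limit to a cache-wide threshold, which is the awkwardness noted above.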
- is duplicated by:
  - WT-9339 Inserts can be committed despite going over the cache size limit (Closed)
  - WT-12918 Cache usage exceeded in cppsuite-hs-cleanup-default (6.0) (Closed)
  - WT-13271 failed: cppsuite-hs-cleanup-default on ubuntu2004-asan [wiredtiger-mongo-v6.0 @ 4580d8cc] (Closed)
- is related to:
  - SERVER-90387 Non-deterministic behavior in transaction_too_large_for_cache/temporarily_unavailable_on_secondary_transaction_application.js (Closed)