We have seen a couple of cases in the field where, under heavy cache pressure, a node can get stuck with the WiredTiger (WT) cache full. In the cases seen so far the pressure has been driven by replica set lag: when the majority commit point lags, WT must retain update history back to that point, pinning content in the cache. So far we have no reproducer and no FTDC data covering the inception of the issue (SERVER-32876), so for now this ticket is a placeholder to gather information.
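For triage, the symptom can be checked from `serverStatus` and `replSetGetStatus`. Below is a minimal diagnostic sketch, not taken from any affected node: it assumes pymongo, a placeholder connection URI, and the standard WT cache statistic names exposed under `serverStatus.wiredTiger.cache`.

```python
# Minimal diagnostic sketch: how full is the WT cache, and how far does the
# majority commit point lag behind the last applied optime?
# Assumptions: pymongo is installed, the URI below is a placeholder, and the
# node is a replica set member (replSetGetStatus requires one).
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI

status = client.admin.command("serverStatus")
cache = status["wiredTiger"]["cache"]
used = cache["bytes currently in the cache"]
limit = cache["maximum bytes configured"]
dirty = cache["tracked dirty bytes in the cache"]
print(f"WT cache: {used / limit:.1%} full, {dirty / limit:.1%} dirty")

repl = client.admin.command("replSetGetStatus")
applied = repl["optimes"]["appliedOpTime"]["ts"]
majority = repl["optimes"]["readConcernMajorityOpTime"]["ts"]
# bson Timestamp.time is seconds since the epoch.
print(f"majority commit point lags last applied by ~{applied.time - majority.time}s")
```

A node exhibiting this issue would show the cache at or near 100% full while the majority commit point lag stays large and does not recover.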
Depends on:
- SERVER-32876 Don't stall ftdc due to WT cache full (Closed)
- WT-4105 Optimize cache usage for update workload with history pinned (Closed)

Is related to:
- SERVER-34938 Secondary slowdown or hang due to content pinned in cache by single oplog batch (Closed)
- SERVER-34941 Add testing to cover cases where timestamps cause cache pressure (Closed)
- SERVER-34942 Stuck with cache full during oplog replay in initial sync (Closed)
- SERVER-36495 Cache pressure issues during recovery oplog application (Closed)
- SERVER-36496 Cache pressure issues during oplog replay in initial sync (Closed)

Related to:
- SERVER-37849 Poor replication performance and cache-full hang on secondary due to pinned content (Backlog)
- SERVER-35103 Checkpoint creates unevictable clean content (Closed)
- SERVER-36221 [3.6] Performance regression on small updates to large documents (Closed)
- SERVER-36238 replica set startup fails in wt_cache_full.js, initial_sync_wt_cache_full.js, recovery_wt_cache_full.js when journaling is disabled (Closed)
- SERVER-36373 create test to fill WT cache during steady state replication (Closed)
- WT-4106 Optimize lookaside implementation to the point where I/O can be saturated (Closed)
- WT-4107 Reduce lookaside usage in update heavy workloads where history is pinned (Closed)