- Type: Bug
- Resolution: Done
- Priority: Major - P3
- None
- Affects Version/s: 3.2.7, 3.2.8
- Component/s: Performance, WiredTiger
- ALL
We are unable to recreate this problem in 3.2.5. It appears to have existed since 3.2.7.
One of our database systems, which houses approximately 90 databases (most of them under 2GB in size), encounters an extreme degradation in performance when a particularly large collection (12GB) is loaded into the cache. The cacheSizeGB on this system is 32GB.
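For reference, the cache limit on this host is set via storage.wiredTiger.engineConfig.cacheSizeGB. The following is a minimal sketch, assuming a locally reachable mongod and the pymongo driver, of how we confirm the configured limit and current cache usage from serverStatus:

```python
# Sketch: confirm the configured WiredTiger cache limit and current usage.
# Assumes a mongod reachable on localhost:27017 and the pymongo driver.
from pymongo import MongoClient

client = MongoClient("localhost", 27017)
cache = client.admin.command("serverStatus")["wiredTiger"]["cache"]

configured_gb = cache["maximum bytes configured"] / 1024 ** 3
in_use_gb = cache["bytes currently in the cache"] / 1024 ** 3

print("cacheSizeGB (configured): %.1f GB" % configured_gb)
print("cache in use:             %.1f GB" % in_use_gb)
```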
The problem only occurs if the WiredTiger cache is full AND the large collection appears to be loaded into the cache (speculation on the last part). In this scenario, requests that typically take 1ms begin to take 50-500ms, and the slow request log fills up quickly.
We can replicate the above scenario by performing a simple query on the large collection when the WiredTiger cache is at or near its cache limit.
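A rough sketch of that repro, assuming the pymongo driver and hypothetical collection names ("bigdb.bigcoll" for the 12GB collection, "smalldb.lookups" for one of the small ones used to time the normally-fast requests):

```python
# Sketch of the repro: time a normally-fast query before and after forcing
# the large collection through a (nearly) full WiredTiger cache.
# Assumes pymongo; "bigdb.bigcoll" and "smalldb.lookups" are hypothetical names.
import time
from pymongo import MongoClient

client = MongoClient("localhost", 27017)
big = client["bigdb"]["bigcoll"]        # the ~12GB collection
small = client["smalldb"]["lookups"]    # one of the many small (<2GB) collections


def time_small_query():
    """Return the latency (ms) of a request that normally takes ~1ms."""
    start = time.perf_counter()
    small.find_one({"_id": 1})
    return (time.perf_counter() - start) * 1000.0


print("before: %.1f ms" % time_small_query())

# A simple full scan is enough to pull the large collection into the cache
# once the cache is already at or near its limit.
for _ in big.find({}, {"_id": 1}):
    pass

print("after:  %.1f ms" % time_small_query())
```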
We've also been able to verify that the slowness does not happen when the database is otherwise taxed with a full cache: we can execute a very long-running map/reduce on other data without impacting performance. Only when the giant collections are loaded do we see the issue. A sketch of that baseline check follows.
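This is roughly how we drive the cache-pressure-only baseline, assuming pymongo and a hypothetical "otherdb.events" collection; the mapReduce command (available in 3.2) keeps the cache busy without touching the 12GB collection, and small-query latency stays flat while it runs:

```python
# Sketch of the baseline: run a long map/reduce over other data to keep the
# cache under pressure, and watch small-query latency while it runs.
# Assumes pymongo; "otherdb.events" and "smalldb.lookups" are hypothetical names.
import threading
import time

from bson.code import Code
from pymongo import MongoClient

client = MongoClient("localhost", 27017)


def run_map_reduce():
    # Long-running mapReduce over unrelated data; output to a scratch collection.
    client["otherdb"].command(
        "mapReduce",
        "events",
        map=Code("function() { emit(this.key, 1); }"),
        reduce=Code("function(key, values) { return Array.sum(values); }"),
        out={"replace": "events_counts"},
    )


worker = threading.Thread(target=run_map_reduce)
worker.start()

small = client["smalldb"]["lookups"]
while worker.is_alive():
    start = time.perf_counter()
    small.find_one({"_id": 1})
    print("small query: %.1f ms" % ((time.perf_counter() - start) * 1000.0))
    time.sleep(1)
```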
- is duplicated by:
  - SERVER-25663 Odd connection timeouts and rejections when replicaset secondary is lagged (Closed)
  - SERVER-25760 Mongodump taking extraordinarily long, utilizing almost zero resources, yet slowing down server (Closed)