- Type: Improvement
- Resolution: Done
- Priority: Major - P3
- None
- Affects Version/s: 2.8.0-rc3
- Component/s: Storage, WiredTiger
- Storage Execution
- Fully Compatible
I am running iibench for MongoDB on two hosts. Both use the WT b-tree, one with snappy and the other with zlib compression. The test uses 10 threads to load 200M documents per thread. Immediately after the load is done, the zlib test uses ~500G versus ~330G for snappy. With some idle time the zlib disk usage drops to ~160G. At first I thought the problem was limited to zlib, but watching the size of data/journal during the tests shows the problem for both snappy and zlib.
My test host has 40 hyperthreaded cores, 144G of RAM, and fast (PCIe) flash.
Can WiredTiger delete log files sooner?
I have a test in progress and it is about 30% done. The problem for the zlib configuration is too much space used in the journal directory. Note that "data" is the root for database files:
du -hs data data/journal; ls data/journal | wc -l
163G    data
131G    data/journal
1337
And this is from the snappy test at about 20% done:
du -hs data data/journal; ls data/journal | wc -l
69G     data
23G     data/journal
231
And this is from later in the snappy test:
du -hs data data/journal; ls data/journal | wc -l
120G    data
47G     data/journal
475

ls -lh data/journal/
total 25G
-rw-r--r-- 1 root root 100M Dec 24 07:58 WiredTigerLog.0000001012
-rw-r--r-- 1 root root 100M Dec 24 07:58 WiredTigerLog.0000001013
-rw-r--r-- 1 root root 100M Dec 24 07:58 WiredTigerLog.0000001014
<snip>
And later in the zlib test:
du -hs data data/journal; ls data/journal | wc -l
231G    data
182G    data/journal
1862
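The snapshots above were taken by re-running du/ls by hand at different points in the load. A small shell helper can take the same snapshot repeatedly; this is just a sketch, and `journal_stats` is an illustrative name, not anything shipped with the server:

```shell
#!/bin/sh
# Sketch of a monitor for journal growth during a load test.
# journal_stats is a hypothetical helper; the du/ls pipeline
# mirrors the commands used in the report above.
journal_stats() {
    dir="$1"
    size_kb=$(du -sk "$dir" | cut -f1)   # total size of the directory in KB
    count=$(ls "$dir" | wc -l)           # number of entries (log files)
    echo "$(date +%H:%M:%S) $dir: ${size_kb}K in $count file(s)"
}

# Example polling loop, one sample per minute against data/journal:
# while true; do journal_stats data/journal; sleep 60; done
```

Logging a timestamped sample per interval makes it easy to see whether log files are being removed between checkpoints or only reclaimed after the load goes idle.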
Issue Links:
- depends on:
  - SERVER-16736 support more than 1 checkpoint thread for WiredTiger (Closed)
  - SERVER-16737 support eviction_dirty_trigger for WiredTiger (Closed)
- is related to:
  - WT-2764 Optimize checkpoints to reduce throughput disruption (Closed)
  - WT-2389 Spread Out I/O Rather Than Spiking During Checkpoints (Closed)
- related to:
  - SERVER-16575 intermittent slow inserts with WiredTiger b-tree (Closed)