Type: Task
Resolution: Done
Affects Version/s: None
Component/s: None
I don't think this is a bug, but it's worth a bit of discussion.
I'm running a particular wtperf workload and have noticed that memory consumption tends to exceed the cache size by between ~1GB and 1.5GB in top. I ran memory profiling, and it seems that excess is temporary but real. It comes from __wt_split_evict: in the first profile you can see that __wt_split_evict has allocated 1318MB, whereas in the second profile it has allocated 240MB. Over the course of the run, the amount of space allocated in each profile varies between ~30MB and 1318MB.
The configured cache size is 2.5GB; in the first profile WiredTiger has 4376MB allocated, and in the second profile it has 3376MB allocated.
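For reference, the divergence can also be watched from inside WiredTiger by reading the connection statistics with a statistics cursor, alongside top. The sketch below is only illustrative and not part of the ticket: the home directory and connection config are placeholders, error handling is omitted, and it simply prints the cache byte counts so they can be compared with the configured cache size.

/*
 * Illustrative sketch: dump WiredTiger's cache byte counts via a
 * statistics cursor. "WT_HOME" and the config string are placeholders.
 */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <wiredtiger.h>

int
main(void)
{
	WT_CONNECTION *conn;
	WT_SESSION *session;
	WT_CURSOR *cursor;
	const char *desc, *pvalue;
	int64_t value;

	/* Open with the same cache size as the wtperf run. */
	(void)wiredtiger_open("WT_HOME", NULL,
	    "create,cache_size=2500MB,statistics=(fast)", &conn);
	(void)conn->open_session(conn, NULL, NULL, &session);

	/* Connection-level statistics cursor. */
	(void)session->open_cursor(session, "statistics:", NULL, NULL, &cursor);
	while (cursor->next(cursor) == 0) {
		(void)cursor->get_value(cursor, &desc, &pvalue, &value);
		/* Only report the cache byte counts. */
		if (strstr(desc, "bytes currently in the cache") != NULL ||
		    strstr(desc, "maximum bytes configured") != NULL)
			printf("%s: %s\n", desc, pvalue);
	}
	(void)cursor->close(cursor);
	(void)conn->close(conn, NULL);
	return (0);
}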
The wtperf configuration is:
# wtperf options file: small btree multi-database configuration
conn_config="eviction_dirty_target=80,statistics_log=(wait=20),session_max=256,statistics=[fast,clear],cache_size=2500MB,log=(enabled=false),checkpoint_sync=false"
database_count=1
table_count=5
table_config="leaf_page_max=4k,internal_page_max=16k,leaf_item_max=1433,internal_item_max=3100,type=file"
# Likewise, divide original icount by database_count.
icount=50000
populate_threads=1
random_range=20000000
checkpoint_interval=120
checkpoint_threads=1
report_interval=20
run_time=1200
threads=((count=40,reads=1),(count=40,inserts=1))
value_sz=250
verbose=1
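(For anyone reproducing this: an options file like the above is normally saved to a file and passed to wtperf on the command line, e.g. ./wtperf -h <home> -O <options-file>; the exact paths there are placeholders, not part of this ticket.)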
As you can see, it does not configure excessively large leaf or internal pages, and it uses a regular btree table (type=file).
keithbostic, any thoughts? Is this just how the code works?