- Type: Bug
- Resolution: Duplicate
- Priority: Major - P3
- Affects Version/s: 4.0.11
- Component/s: Networking
- Environment: Linux 4.15.0-55-generic #60~16.04.2-Ubuntu SMP Thu Jul 4 09:03:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
db version v4.0.11
git version: 417d1a712e9f040d54beca8e4943edce218e9a8c
OpenSSL version: OpenSSL 1.0.2g 1 Mar 2016
allocator: tcmalloc
modules: none
build environment:
distmod: ubuntu1604
distarch: x86_64
target_arch: x86_64
Intel(R) Xeon(R) CPU E3-1230 v3 @ 3.30GHz
32GB of RAM
total used free shared buff/cache available
Mem: 32899580 4143572 23439176 1328 5316832 28245712
Swap: 7812092 768 7811324
- ALL
- Repl 2019-10-21, Service Arch 2019-11-18, Service Arch 2019-12-02, Service Arch 2019-12-16
(copied to CRM)
Good day.
After upgrading to 4.0 we've encountered some sort of memory leak in two cases:
1) In the cluster environment described in SERVER-43038. I don't have diagnostic.data from that time, but I can possibly extract some logs from our logging system.
The memory leak occurred after we shut down our slaveDelay instances and lasted until we started them again.
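For context, a slaveDelay instance is a delayed secondary. A minimal mongo shell sketch of how such a member is configured in 4.0 (the member index and the one-hour delay are illustrative assumptions, not our actual settings):

```javascript
// Sketch: configuring a delayed (slaveDelay) secondary in the mongo shell.
// Member index 2 and the 3600-second delay are hypothetical values.
var cfg = rs.conf();
cfg.members[2].priority = 0;      // a delayed member must never become primary
cfg.members[2].hidden = true;     // keep it invisible to application reads
cfg.members[2].slaveDelay = 3600; // apply oplog entries one hour behind the primary
rs.reconfig(cfg);
```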
2) Recently in a replica set configuration (I have attached MongoDB memory graphs from two servers (each one was primary at some point), server logs, diagnostic.data, and the configuration file).
The spike around 2-3pm on 19.09 on db1-1 was the restoration of the 'drive' database with mongorestore. I also ran rs.stepDown() and restarted the server instance db1-1 at 4:30pm on 21.09 due to memory pressure (the memory leak moved to the new primary). At 11:25pm on 21.09 we disabled most of the processes that work with that replica set.
After starting the workload again I cannot reproduce the memory leak. On the contrary, you can see resident memory decreasing on db1-2 from 3pm on 23.09 up to the present.
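The resident/virtual memory figures shown in the attached graphs can also be read directly from a live mongod. A minimal mongo shell sketch, run against a live instance (serverStatus reports mem sizes in megabytes):

```javascript
// Sketch: reading mongod memory counters from the mongo shell.
var status = db.serverStatus();
print("resident MB: " + status.mem.resident);
print("virtual MB:  " + status.mem.virtual);
// With the tcmalloc allocator, the tcmalloc section shows memory the
// allocator still holds but has not yet returned to the OS.
printjson(status.tcmalloc.tcmalloc.pageheap_free_bytes);
```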
- duplicates: SERVER-44567 Reimplement CommandState destructors for v4.0 (Closed)
- related to: SERVER-41031 After an unreachable node is added and removed from the replica set, the other replica set members continue to send heartbeat to this removed node (Open)
- related to: SERVER-43038 Commit point can be stale on slaveDelay nodes and cause memory pressure (Closed)