Type: Bug
Resolution: Gone away
Priority: Major - P3
Affects Version/s: 1.12.0, 1.12.1
Component/s: None
Detailed steps to reproduce the problem?
After launching the project-service in a pod in Kubernetes (AKS, mostly current release), it works properly and has reasonable resource consumption.
After a few days, however, memory consumption has steadily ramped up at a slow pace, despite the service not being accessed.
Please see the attached image: 20231006_191500MST_LEAK_K8S_Resources.png. A minimal sketch of how heap usage can be inspected inside the pod follows below.
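To make the growth easier to attribute, a debug endpoint can be added to the service. This is only a sketch, assuming the pod can expose a local port (6060 here is an arbitrary choice); the standard net/http/pprof handler then allows heap profiles to be captured while the leak develops.

package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers
)

func main() {
	// Heap snapshots can then be pulled from inside the pod with:
	//   go tool pprof http://localhost:6060/debug/pprof/heap
	log.Fatal(http.ListenAndServe("localhost:6060", nil))
}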
Definition of done: what must be done to consider the task complete?
Launch our web service and send a few queries to pull data from the Atlas MongoDB database.
Let the web service sit idle, but remain connected to the database via a Mongo client, for 24 hours.
Examine the memory consumption over those 24 hours. If the level remains fairly constant while the API is inactive, then all is well.
Otherwise, if memory consumption is slowly increasing, there is still a leak. A minimal sketch of such a verification run is shown below.
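The following sketch is not the actual project-service code; the connection string and credentials are placeholders and the query is illustrative. It connects with the Go driver, pulls some data from the 'projects' collection, then sits idle for 24 hours with the client still connected, periodically logging heap statistics so a slow ramp would show up in the logs.

package main

import (
	"context"
	"log"
	"runtime"
	"time"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	ctx := context.Background()

	// Placeholder Atlas URI; the real connection string is not shown here.
	client, err := mongo.Connect(ctx, options.Client().ApplyURI("mongodb+srv://user:pass@cluster.example.mongodb.net"))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Disconnect(ctx)

	coll := client.Database("kdev-csc").Collection("projects")

	// Send a few queries to pull data, mirroring normal use of the API.
	for i := 0; i < 5; i++ {
		cur, err := coll.Find(ctx, bson.D{})
		if err != nil {
			log.Fatal(err)
		}
		var docs []bson.M
		if err := cur.All(ctx, &docs); err != nil {
			log.Fatal(err)
		}
		log.Printf("query %d returned %d documents", i+1, len(docs))
	}

	// Stay idle for 24 hours with the client still connected,
	// sampling heap usage so a slow ramp is visible in the logs.
	ticker := time.NewTicker(15 * time.Minute)
	defer ticker.Stop()
	deadline := time.After(24 * time.Hour)
	for {
		select {
		case <-ticker.C:
			var ms runtime.MemStats
			runtime.ReadMemStats(&ms)
			log.Printf("HeapAlloc=%d KiB, HeapObjects=%d", ms.HeapAlloc/1024, ms.HeapObjects)
		case <-deadline:
			return
		}
	}
}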
The exact Go version used, with patch level:
$ go version
go1.21.1 darwin/amd64
The exact version of the Go driver used:
$ go list -m go.mongodb.org/mongo-driver
go.mongodb.org/mongo-driver v1.12.1
Describe how MongoDB is set up. Local vs Hosted, version, topology, load balanced, etc.
We have a cluster hosted on MongoDB Atlas. The cluster contains a database named 'kdev-csc', and within that database there is a collection named 'projects'. We use SCRAM-SHA-256 authentication. A sketch of how such a client is typically configured is shown below.
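For reference, a minimal sketch of configuring the Go driver client for this setup; the URI, username, and password below are placeholders, not our real values.

package main

import (
	"context"
	"log"

	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	opts := options.Client().
		ApplyURI("mongodb+srv://cluster.example.mongodb.net"). // placeholder host
		SetAuth(options.Credential{
			AuthMechanism: "SCRAM-SHA-256",
			Username:      "appUser",     // placeholder
			Password:      "appPassword", // placeholder
		})

	client, err := mongo.Connect(context.Background(), opts)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Disconnect(context.Background())

	// The service then works against the 'projects' collection in 'kdev-csc'.
	_ = client.Database("kdev-csc").Collection("projects")
	log.Println("connected")
}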
The operating system and version (e.g. Windows 7, OSX 10.8, ...)
The web service runs in a Kubernetes v1.27.3 pod. The image is based on Linux x64.
Security Vulnerabilities
If you’ve identified a security vulnerability in a driver or any other MongoDB project, please report it according to the instructions here:
NONE