-
Type: Bug
-
Resolution: Works as Designed
-
Priority: Major - P3
-
None
-
Affects Version/s: 4.4.1
-
Component/s: Aggregation Framework
-
Environment:
MongoDB:
MongoDB shell version v4.4.1
Build Info: {
"version": "4.4.1",
"gitVersion": "ad91a93a5a31e175f5cbf8c69561e788bbc55ce1",
"openSSLVersion": "OpenSSL 1.1.1f 31 Mar 2020",
"modules": [],
"allocator": "tcmalloc",
"environment": {
"distmod": "ubuntu2004",
"distarch": "x86_64",
"target_arch": "x86_64"
}
}
Ubuntu:
lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 20.04.1 LTS
Release: 20.04
Codename: focal
Node.js:
node -v
v14.8.0
package.json:
"engines": {
"node": "14.x"
},
"dependencies": {
"mongodb": "^3.6.2",
}MongoDB: MongoDB shell version v4.4.1 Build Info: { "version": "4.4.1", "gitVersion": "ad91a93a5a31e175f5cbf8c69561e788bbc55ce1", "openSSLVersion": "OpenSSL 1.1.1f 31 Mar 2020", "modules": [], "allocator": "tcmalloc", "environment": { "distmod": "ubuntu2004", "distarch": "x86_64", "target_arch": "x86_64" } } Ubuntu: lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 20.04.1 LTS Release: 20.04 Codename: focal Node.js: node -v v14.8.0 package.json: "engines": { "node": "14.x" }, "dependencies": { "mongodb": "^3.6.2", }
-
ALL
-
I have a newly installed MongoDB server (v4.4.1) running on an AWS EC2 instance with Ubuntu. Nothing else is installed on the server. The database currently contains 35 documents of about 3 MB each, i.e. less than 110 MB of data.
I'm running an aggregation query with two stages: a simple $match stage that restricts the aggregation to 31 documents, and a $group stage with an $accumulator inside (that stage doesn't allocate much new space, and even if it did, with 3 MB documents I wouldn't expect it to be much more). The query's result should be a single ~3 MB document that is a merger of all of the above.
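For reference, a shell-style sketch of what such a pipeline might look like (the field names, the $match condition, and the merge logic are assumptions for illustration, not the actual query; drivers typically expect the $accumulator functions as strings, while the shell accepts plain functions):

```javascript
// Hypothetical sketch of the two-stage pipeline described above.
// Field names (status) and the merge strategy are illustrative only.

// The $accumulator's JS functions, written as named functions so the
// merge logic can also be exercised outside the server.
function accumulateFn(state, doc) {
  // Merge the incoming document's fields into the running state.
  return Object.assign({}, state, doc);
}

function mergeFn(state1, state2) {
  // Combine two partial states.
  return Object.assign({}, state1, state2);
}

const aggregationPipeline = [
  // Stage 1: restrict the input to the documents of interest.
  { $match: { status: 'active' } },
  // Stage 2: fold all matched documents into a single merged document.
  {
    $group: {
      _id: null,
      merged: {
        $accumulator: {
          init: function () { return {}; },
          accumulate: accumulateFn,
          accumulateArgs: ['$$ROOT'],
          merge: mergeFn,
          lang: 'js',
        },
      },
    },
  },
];
```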
Running the query when only 3 documents exist finishes without a problem, but with 35 I consistently get the following error:
MongoError: Out of memory
    at MessageStream.messageHandler (/app/node_modules/mongodb/lib/cmap/connection.js:268:20)
    at MessageStream.emit (events.js:314:20)
    at processIncomingData (/app/node_modules/mongodb/lib/cmap/message_stream.js:144:12)
    at MessageStream._write (/app/node_modules/mongodb/lib/cmap/message_stream.js:42:5)
    at writeOrBuffer (_stream_writable.js:352:12)
    at MessageStream.Writable.write (_stream_writable.js:303:10)
    at Socket.ondata (_stream_readable.js:717:22)
    at Socket.emit (events.js:314:20)
    at addChunk (_stream_readable.js:307:12)
    at readableAddChunk (_stream_readable.js:282:9)
I've read a lot online and couldn't solve it. I've tried:
- Upgrading the server's hardware to t3.large (8 GB RAM).
- Adding allowDiskUse to the query:
const options = {
  allowDiskUse: true,
  // explain: true,
};
mongoConn
  .db(dbConfig.database)
  .collection(collection)
  .aggregate(aggregationPipeline, options)
  .toArray((err: MongoError, result: any) => {
- With htop, I can see total memory usage of 210M/7.68G after restarting MongoDB; during the query it climbs to a peak of 691M/7.68G, fails, and remains at 627M/7.68G afterward.
- With db.enableFreeMonitoring() I can see a constant 2 GB of virtual memory usage, with peaks at 2.1 GB.
Summary: I know that MongoDB has a 100 MB per-stage memory limit for aggregation, but I would expect not to reach it with 3 MB documents and allowDiskUse enabled. What am I missing here?
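One possible explanation, offered as an assumption rather than a confirmed diagnosis: the state of a JavaScript $accumulator lives in the server's embedded JS engine rather than in the aggregation framework's own memory accounting, so allowDiskUse may not help it spill to disk. A sketch of the same single-document merge using the built-in $mergeObjects accumulator, which is handled by the server's native $group machinery (field names are illustrative only):

```javascript
// Hedged sketch: the merge expressed with the built-in $mergeObjects
// accumulator instead of a JavaScript $accumulator. Built-in accumulators
// are subject to the documented per-stage memory limit and can spill to
// disk with allowDiskUse; JS $accumulator state is managed separately
// by the embedded JavaScript engine.
const pipeline = [
  { $match: { status: 'active' } },
  {
    $group: {
      _id: null,
      // Later documents' fields overwrite earlier ones on key collisions.
      merged: { $mergeObjects: '$$ROOT' },
    },
  },
];

const options = { allowDiskUse: true };
```

Note that $mergeObjects applies a last-write-wins rule on duplicate keys, so this is only equivalent to a custom $accumulator whose merge logic does the same.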
- related to
-
SERVER-51404 Improve log messages when aggregation stage hits the 100 MB limit
- Closed