When a CachedBSONObj is set to a BSONObj that is larger than the CachedBSONObj's fixed-size buffer, the BSONObj is not copied at all; instead the cached object is flagged as "tooBig". This causes it to return the fixed value of
{ $msg : "query not recording (too large)" }
This is what shows up for large queries in currentOp(), in profiling, and, as of 2.6, in the log files.
Unfortunately, this is not particularly useful precisely when large queries are causing performance problems. It also wastes space, because the CachedBSONObj's fixed-size buffer is still allocated but goes unused.
Much better would be to copy as much of the BSONObj as possible into the CachedBSONObj's buffer.
For example, even a naive algorithm that walks the BSONObj's fields and memcpys them one at a time until there is not enough space left in the buffer would be an improvement; at that point, a suitable { $msg : "query truncated" } field (or similar) could be copied in to mark the truncation. Even better would be something that does this recursively, descending into arrays and sub-documents and copying them (partially if necessary) until the buffer is exhausted. Neither approach ought to add much overhead. The pathological case is an object with very many tiny fields, which could be handled by capping the number of fields copied at, say, 100.
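A rough sketch of the recursive variant (not MongoDB's implementation; element sizes are approximated here via stringification, where the real code would use BSONElement::size(), and the field names, budget, and marker text are illustrative):

```python
MAX_FIELDS = 100                 # cap to guard the many-tiny-fields case
MARKER_KEY, MARKER_VAL = "$msg", "query truncated"

def approx_size(key, value):
    # Crude stand-in for BSONElement::size(): key bytes + stringified value
    # plus a small per-element overhead.
    return len(key) + len(str(value)) + 2

def truncate_doc(doc, budget):
    """Copy as many fields of `doc` as fit within `budget` bytes, recursing
    into sub-documents; append a marker field wherever truncation occurred."""
    out, used, truncated = {}, 0, False
    for i, (key, value) in enumerate(doc.items()):
        if i >= MAX_FIELDS:
            truncated = True
            break
        if isinstance(value, dict):
            # Recurse, giving the sub-document whatever budget remains.
            value = truncate_doc(value, budget - used - len(key) - 2)
        size = approx_size(key, value)
        if used + size > budget:
            truncated = True
            break
        out[key] = value
        used += size
    if truncated:
        out[MARKER_KEY] = MARKER_VAL
    return out
```

With this, a small query is copied intact, while an oversized sub-document (e.g. a huge $in list or string) is replaced by the marker in place, leaving the remaining fields visible instead of discarding the whole query.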
Is related to:
- SERVER-16324 Command execution log line displays "query not recording (too large)" instead of abbreviated command object (Closed)

Related to:
- SERVER-13935 Allow specifying profile entry and currentOp max query size (Closed)
- SERVER-1794 make CurOp query's lifespan same as the op - so we can just keep a pointer (Closed)
- SERVER-7677 Limit CurOp output to fixed size (Closed)
- SERVER-5605 db.currentOp() - "query" : {"$msg" : "query not recording (too large)" (Closed)