Core Server / SERVER-13952

Cache partial BSONObj if possible (instead of {$msg: "query not recording (too large)"})

    • Type: Improvement
    • Resolution: Duplicate
    • Priority: Major - P3
    • Affects Version/s: None
    • Component/s: Logging, Querying

      When a CachedBSONObj is set to a BSONObj that is larger than the CachedBSONObj's fixed-size buffer, the BSONObj is not copied; instead it is flagged as "tooBig". This causes the cached object to return the fixed value of

      { $msg : "query not recording (too large)" }
      

      This is what shows up for large queries in currentOp(), in profiling, and, in 2.6, in the log files.
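
      To make the current behaviour concrete, here is a rough sketch of the mechanism described above (not the actual server code; the class shape, buffer size, constructor and method names are illustrative assumptions):

      #include <cstring>
      #include "mongo/db/jsobj.h"   // assumed header for BSONObj/BSON in the server tree

      // Illustrative only: a fixed-size cache that either holds a full copy of the
      // BSONObj or, when the object does not fit, just a "tooBig" flag.
      class CachedBSONObjSketch {
      public:
          static const int kBufSize = 512;             // assumed fixed buffer size

          CachedBSONObjSketch() {
              set(mongo::BSONObj());                   // start out holding an empty object
          }

          void set(const mongo::BSONObj& obj) {
              _tooBig = obj.objsize() > kBufSize;
              if (!_tooBig)
                  std::memcpy(_buf, obj.objdata(), obj.objsize());
          }

          mongo::BSONObj get() const {
              if (_tooBig)
                  return BSON("$msg" << "query not recording (too large)");
              return mongo::BSONObj(_buf);             // buffer holds a complete object
          }

      private:
          bool _tooBig;
          char _buf[kBufSize];
      };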

      Unfortunately, this is not particularly useful when large queries are causing performance problems. It also wastes space, because the CachedBSONObj's fixed-size buffer is still present but unused.

      Much better would be to copy as much of the BSONObj as possible into the CachedBSONObj's buffer.

      For example, even a naive algorithm would help: walk the BSONObj's fields and memcpy them one at a time until there is not enough space left in the buffer, at which point copy in a suitable { $msg: "query truncated" } field or similar. Even better would be something that does this recursively, diving inside arrays and sub-documents and copying them (partially if necessary) until the buffer has been exhausted. Neither of these ought to add much overhead. The pathological case would be an object with very many tiny fields, which could be dealt with by capping the number of fields copied at, say, 100.
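
      A rough sketch of the naive version follows, using the BSONObj/BSONObjBuilder/BSONObjIterator interfaces from the server codebase (the helper name, the headroom constant, and the exact placement of the marker field are illustrative assumptions):

      #include "mongo/db/jsobj.h"   // assumed header for BSONObj and friends

      // Illustrative only: copy as many top-level fields of 'source' as will fit in
      // roughly 'maxSize' bytes, then append a truncation marker.
      mongo::BSONObj copyTruncated(const mongo::BSONObj& source, int maxSize) {
          const int kMaxFields = 100;      // cap for the many-tiny-fields pathological case
          const int kHeadroom = 64;        // space held back for the marker and object footer
          mongo::BSONObjBuilder bob;
          int copied = 0;

          mongo::BSONObjIterator it(source);
          while (it.more()) {
              mongo::BSONElement elem = it.next();
              if (copied >= kMaxFields || bob.len() + elem.size() + kHeadroom > maxSize) {
                  bob.append("$msg", "query truncated");
                  break;
              }
              bob.append(elem);            // copies the whole field, name and value
              ++copied;
          }
          return bob.obj();
      }

      The recursive variant would apply the same idea to elem whenever it is a sub-document or array: build a partial sub-object with the same helper instead of dropping the field outright.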

            Assignee: Unassigned
            Reporter: Kevin Pulo (kevin.pulo@mongodb.com)
            Votes: 1
            Watchers: 8
