Core Server / SERVER-56349

Secondaries apply in-place updates to large documents slower than primaries

    • Type: Bug
    • Resolution: Unresolved
    • Priority: Major - P3
    • Fix Version/s: None
    • Affects Version/s: None
    • Component/s: Storage Execution
    • Operating System: ALL

      In-place updates to large documents can be applied more slowly on secondaries than on primaries, which can result in replication (secondary) lag.

      This only affects "in-place" updates: updates that change a field's value but not the document's size (for example, overwriting one 64-bit integer with another).
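
      As a minimal illustration of such an update, here is a sketch using the mongocxx driver; the collection name, field name, and connection string are made up for the example, and it assumes a mongod running locally:

      {code:cpp}
      #include <bsoncxx/builder/basic/document.hpp>
      #include <bsoncxx/types.hpp>
      #include <mongocxx/client.hpp>
      #include <mongocxx/instance.hpp>
      #include <mongocxx/uri.hpp>

      using bsoncxx::builder::basic::kvp;
      using bsoncxx::builder::basic::make_document;

      int main() {
          mongocxx::instance inst{};
          mongocxx::client client{mongocxx::uri{"mongodb://localhost:27017"}};
          auto coll = client["test"]["docs"];

          // Overwriting an int64 with another int64 changes the field's value
          // but not its BSON encoding length, so the document's size stays the
          // same. This is the kind of "in-place" update this ticket is about.
          coll.update_one(
              make_document(kvp("_id", 1)),
              make_document(kvp("$set",
                  make_document(kvp("counter", bsoncxx::types::b_int64{42})))));
      }
      {code}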

      This is because writers on the primary may generate WT_MODIFY structures (called "damages" by MongoDB) as part of the update stages, and the cost of doing so does not depend on document size. When these updates are replicated, however, the secondary has to reconstitute the new document and compute a binary diff against the old document to generate the WT_MODIFY structures. This can be slower than the operation on the primary, and it scales with the size of the document.
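
      To make the asymmetry concrete, here is a rough sketch; the Damage struct is a hypothetical stand-in for WiredTiger's WT_MODIFY, and both function names are illustrative, not the actual server or WiredTiger code:

      {code:cpp}
      #include <cassert>
      #include <cstddef>
      #include <cstdint>
      #include <vector>

      // Hypothetical stand-in for a WT_MODIFY entry ("damage"): replace the
      // bytes at [offset, offset + bytes.size()) in the stored document.
      struct Damage {
          std::size_t offset;
          std::vector<std::uint8_t> bytes;
      };

      // Primary (sketch): the update stage knows exactly which value it
      // overwrote, so it can emit the damage directly. The cost does not
      // depend on the document's total size.
      Damage damageFromUpdateStage(std::size_t valueOffset,
                                   std::vector<std::uint8_t> newValueBytes) {
          return Damage{valueOffset, std::move(newValueBytes)};
      }

      // Secondary (sketch): the oplog entry is applied to rebuild the full
      // post-image, and damages are recovered by a binary diff of the old and
      // new documents. The scan is O(document size), which is where the extra
      // cost, and the lag, comes from.
      std::vector<Damage> damagesFromBinaryDiff(const std::vector<std::uint8_t>& oldDoc,
                                                const std::vector<std::uint8_t>& newDoc) {
          assert(oldDoc.size() == newDoc.size());  // in-place: size unchanged
          std::vector<Damage> damages;
          for (std::size_t i = 0; i < oldDoc.size();) {
              if (oldDoc[i] == newDoc[i]) { ++i; continue; }
              std::size_t start = i;
              while (i < oldDoc.size() && oldDoc[i] != newDoc[i]) ++i;
              damages.push_back({start, {newDoc.begin() + start, newDoc.begin() + i}});
          }
          return damages;
      }
      {code}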

      We do not replicate enough information for the secondary to generate WT_MODIFY structures without computing a binary diff of the document. We should consider improvements to doc_diff (i.e. the format used to efficiently replicate updates) that would allow the secondary to create WT_MODIFY updates directly.
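
      As a sketch of one possible direction (the EnrichedDiffEntry type and its fields are hypothetical, not the current doc_diff format): if each replicated diff entry also carried the byte offset and new encoding of the changed value within the stored document, the secondary could translate the diff into damages directly, skipping the post-image binary diff:

      {code:cpp}
      #include <cstddef>
      #include <cstdint>
      #include <vector>

      // Same hypothetical Damage struct as in the sketch above.
      struct Damage {
          std::size_t offset;
          std::vector<std::uint8_t> bytes;
      };

      // Hypothetical enriched diff entry: today's doc_diff describes changes
      // at the field level; this adds the byte position and new on-disk
      // encoding of the value inside the stored document.
      struct EnrichedDiffEntry {
          std::size_t byteOffset;
          std::vector<std::uint8_t> newValueBytes;
      };

      // With that information, producing WT_MODIFY-style damages on the
      // secondary is a straight translation, with cost proportional to the
      // number of changed fields rather than the size of the document.
      std::vector<Damage> damagesFromDiff(const std::vector<EnrichedDiffEntry>& diff) {
          std::vector<Damage> damages;
          damages.reserve(diff.size());
          for (const auto& entry : diff) {
              damages.push_back({entry.byteOffset, entry.newValueBytes});
          }
          return damages;
      }
      {code}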

            Assignee: backlog-server-execution [DO NOT USE] Backlog - Storage Execution Team
            Reporter: louis.williams@mongodb.com (Louis Williams)
            Votes: 0
            Watchers: 16
