Core Server / SERVER-18677

Throughput drop during transaction pinned phase of checkpoints under WiredTiger (larger data set)

    • Type: Bug
    • Resolution: Done
    • Priority: Major - P3
    • Fix Version/s: None
    • Affects Version/s: 3.1.3
    • Component/s: WiredTiger
    • Labels: None
    • Backwards Compatibility: Fully Compatible
    • Operating System: ALL

      • 132 GB memory, 32 processors, slowish SSDs, 20 GB WT cache
      • YCSB, 10 fields/doc, 50/50 read/update workload (zipfian distribution), 20 threads (a reproduction sketch follows this list)
      • Data set size varies - see tests below:
        • 10M docs, data set ~12 GB (but cache usage can be double that)
        • 20M docs, data set ~23 GB (but cache usage can be double that)
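
      For reference, a minimal reproduction sketch of a setup like the one described above, assuming YCSB's stock CoreWorkload and MongoDB binding; the record count, operation count, URL and paths here are illustrative, not taken from the original runs:

        # workload.properties - approximates the workload above
        # (recordcount 10000000 for the smaller test, 20000000 for the larger)
        workload=com.yahoo.ycsb.workloads.CoreWorkload
        recordcount=20000000
        operationcount=100000000
        # 10 fields per document, 50/50 read/update mix, zipfian key distribution
        fieldcount=10
        readproportion=0.5
        updateproportion=0.5
        requestdistribution=zipfian

        # mongod with a 20 GB WiredTiger cache
        mongod --storageEngine wiredTiger --wiredTigerCacheSizeGB 20 --dbpath /data/db

        # load, then run with 20 client threads against the MongoDB binding
        ./bin/ycsb load mongodb -P workload.properties -threads 20 -p mongodb.url=mongodb://localhost:27017/ycsb
        ./bin/ycsb run mongodb -P workload.properties -threads 20 -p mongodb.url=mongodb://localhost:27017/ycsb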

      In SERVER-18315 an issue was reported in 3.0.2 during the "transaction pinned" phase of checkpoints (B-C) with a 10M document data set. [see attached graphs]

      This was fixed in 3.1.3. [see attached graphs]

      However, if the data set is increased to 20M documents, a similar problem still appears in 3.1.3. Based on the shapes of the curves, this may be a somewhat different issue: in SERVER-18315 the throughput dropped in proportion to the rise in "range of transactions pinned", but that does not seem to be the case here.

      (Note: C-D is a different issue - see SERVER-18674).
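
      To correlate the throughput drop with the pinned-transaction range and checkpoint activity, a rough monitoring sketch using pymongo and serverStatus is shown below. The WiredTiger statistic names ("transaction range of IDs currently pinned", "transaction checkpoint currently running") are assumed to match what 3.0/3.1 reports under wiredTiger.transaction; they may differ in other versions.

        import time
        from pymongo import MongoClient

        # Sample serverStatus once per second and print ops/sec alongside the
        # WiredTiger pinned-transaction range and checkpoint-running flag.
        client = MongoClient("mongodb://localhost:27017")  # adjust URL as needed

        def sample():
            s = client.admin.command("serverStatus")
            ops = sum(s["opcounters"][k] for k in ("insert", "query", "update", "delete"))
            txn = s["wiredTiger"]["transaction"]
            pinned = txn.get("transaction range of IDs currently pinned", 0)
            ckpt = txn.get("transaction checkpoint currently running", 0)
            return ops, pinned, ckpt

        prev_ops, _, _ = sample()
        while True:
            time.sleep(1)
            ops, pinned, ckpt = sample()
            print("ops/sec=%d pinned_range=%d checkpoint_running=%d" % (ops - prev_ops, pinned, ckpt))
            prev_ops = ops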

      Attachments:
        1. try-64-gdb.png (562 kB)
        2. try-52.png (99 kB)
        3. try-49b.png (240 kB)
        4. try-49a.png (84 kB)
        5. try-33.png (62 kB)

            Assignee: David Hows (david.hows)
            Reporter: Bruce Lucas (bruce.lucas@mongodb.com) (Inactive)
            Votes: 0
            Watchers: 8
