Core Server / SERVER-32139

Oplog truncation creates large amount of dirty data in cache

    • Type: Bug
    • Resolution: Duplicate
    • Priority: Major - P3
    • Affects Version/s: 3.6.0-rc8
    • Component/s: WiredTiger
    • Operating System: ALL
    • Sprint: Storage Non-NYC 2018-03-12

      1-node repl set, HDD, 500 GB oplog, 4 GB cache, 5 threads inserting 130 kB docs. Left is 3.6.0-rc8, right is 3.4.10 (see attached comparison.png).

      At point A, where the oplog fills and oplog truncation begins in 3.6.0-rc8, we see large amounts of oplog being read into cache, large amounts of dirty data in cache, and resulting operation stalls. This does not occur at the corresponding point B when running on 3.4.10.

      FTDC data for the above two runs attached.
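      For reference, a minimal sketch of the workload described above. This is not the attached repro.sh: the host, replica-set name rs0, database/collection names, and the use of pymongo are all assumptions. It presumes a single-node replica set already started with roughly --wiredTigerCacheSizeGB 4 and --oplogSize 512000 (about 500 GB) on an HDD-backed dbpath, and it periodically prints the WiredTiger dirty-cache figures that FTDC also records.

      ```python
      # Hypothetical repro sketch (not the attached repro.sh): 5 threads insert
      # ~130 kB documents into a 1-node replica set until the 500 GB oplog fills,
      # while the main thread samples WiredTiger cache statistics once per second.
      # Assumes mongod was started with something like:
      #   mongod --replSet rs0 --dbpath /hdd/data \
      #          --wiredTigerCacheSizeGB 4 --oplogSize 512000
      import threading
      import time
      from pymongo import MongoClient

      URI = "mongodb://localhost:27017/?replicaSet=rs0"  # assumed host/replset name
      PAYLOAD = "x" * 130_000                            # ~130 kB per document
      NUM_THREADS = 5

      def insert_loop():
          coll = MongoClient(URI).test.oplog_repro       # assumed namespace
          while True:
              # Fresh dict each time so the driver can attach a new _id.
              coll.insert_one({"payload": PAYLOAD})

      for _ in range(NUM_THREADS):
          threading.Thread(target=insert_loop, daemon=True).start()

      admin = MongoClient(URI).admin
      while True:
          cache = admin.command("serverStatus")["wiredTiger"]["cache"]
          # Statistic names as they appear in 3.6-era serverStatus output.
          print(cache.get("tracked dirty bytes in the cache"),
                cache.get("bytes currently in the cache"))
          time.sleep(1)
      ```

      Per the description above, the dirty-bytes value is where 3.6.0-rc8 and 3.4.10 diverge once the oplog fills and truncation begins.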

        Attachments:
          1. comparison.png (167 kB, Bruce Lucas)
          2. correlations.png (360 kB, Bruce Lucas)
          3. dd_wt3805.tgz (1.02 MB, Luke Chen)
          4. dd-3410.tar (948 kB, Bruce Lucas)
          5. dd-rc8.tar (978 kB, Bruce Lucas)
          6. master_vs_wt3767_3768.png (173 kB, Neha Khatri)
          7. master_with_wt-3805.png (81 kB, Luke Chen)
          8. optimes.png (136 kB, Bruce Lucas)
          9. repro.sh (0.7 kB, Bruce Lucas)
          10. Screen Shot 2017-12-04 at 8.01.26 pm.png (477 kB, Alexander Gorrod)
          11. Screen Shot 2017-12-04 at 8.14.44 pm.png (275 kB, Alexander Gorrod)
          12. smalloplog.png (133 kB, Bruce Lucas)

            Assignee: Luke Chen (luke.chen@mongodb.com)
            Reporter: Bruce Lucas (bruce.lucas@mongodb.com) (Inactive)
            Votes: 0
            Watchers: 23
