Core Server / SERVER-32139

Oplog truncation creates large amount of dirty data in cache

    • Type: Bug
    • Resolution: Duplicate
    • Priority: Major - P3
    • Affects Version/s: 3.6.0-rc8
    • Component/s: WiredTiger
    • Operating System: ALL
    • Sprint: Storage Non-NYC 2018-03-12

      1-node repl set, HDD, 500 GB oplog, 4 GB cache, 5 threads inserting 130 kB docs. Left is 3.6.0-rc8, right is 3.4.10 (see attached screenshots).

      At point A, where the oplog fills and oplog truncation begins in 3.6.0-rc8, we see large amounts of oplog being read into cache, large amounts of dirty data in cache, and resulting operation stalls. This does not occur at the corresponding point B when running on 3.4.10.

      FTDC data for the above two runs attached.
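
      For context, below is a minimal sketch of the insert side of this workload, assuming pymongo; the connection URI, namespace, payload shape, and batch counts are placeholders, not the contents of the attached repro.sh. The mongod under test would be started with something like --replSet rs0 --oplogSize 512000 --wiredTigerCacheSizeGB 4 to match the 500 GB oplog and 4 GB cache described above.

      import threading
      from pymongo import MongoClient

      # Assumed placeholders: URI, database/collection names, batch sizes.
      URI = "mongodb://localhost:27017/?replicaSet=rs0"
      PAYLOAD = "x" * (130 * 1024)    # ~130 kB document body
      THREADS = 5
      BATCHES_PER_THREAD = 100000     # run long enough for the oplog to fill and truncation to begin

      def insert_worker():
          coll = MongoClient(URI)["test"]["oplog_trunc_repro"]
          for _ in range(BATCHES_PER_THREAD):
              # 10 documents per round trip, roughly 1.3 MB per insert_many call
              coll.insert_many([{"payload": PAYLOAD} for _ in range(10)])

      workers = [threading.Thread(target=insert_worker) for _ in range(THREADS)]
      for t in workers:
          t.start()
      for t in workers:
          t.join()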

      Attachments:
        1. smalloplog.png (133 kB)
        2. Screen Shot 2017-12-04 at 8.14.44 pm.png (275 kB)
        3. Screen Shot 2017-12-04 at 8.01.26 pm.png (477 kB)
        4. repro.sh (0.7 kB)
        5. optimes.png (136 kB)
        6. master_with_wt-3805.png (81 kB)
        7. master_vs_wt3767_3768.png (173 kB)
        8. dd-rc8.tar (978 kB)
        9. dd-3410.tar (948 kB)
        10. dd.wt3767_68.zip (436 kB)
        11. dd_wt3805.tgz (1.02 MB)
        12. dd_mdb_master.zip (654 kB)
        13. correlations.png (360 kB)
        14. comparison.png (167 kB)

            Assignee: Luke Chen (luke.chen@mongodb.com)
            Reporter: Bruce Lucas (Inactive) (bruce.lucas@mongodb.com)
            Votes: 0
            Watchers: 23
