WiredTiger / WT-10470

Review benchmarks and hardware used for automated performance testing

    • Type: Task
    • Resolution: Unresolved
    • Priority: Major - P3
    • Affects Version/s: None
    • Component/s: None
    • StorEng - Refinement Pipeline

      The WiredTiger automated performance tests aim to provide a reliable signal when a change introduces a performance regression.

      They are currently run on "regular" Evergreen hosts, which do not provide predictable I/O performance, so any performance test that depends on I/O will not produce repeatable results.
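      As a rough illustration only (this is a hypothetical sketch, not existing WiredTiger or Evergreen tooling), host-level I/O jitter could be quantified before trusting any I/O-bound benchmark on that host, for example by timing repeated synchronous writes and reporting the run-to-run coefficient of variation:

{code:python}
#!/usr/bin/env python3
# Hypothetical probe (an assumption, not part of the WiredTiger test suite):
# time repeated fsync'd writes on a candidate host. Large run-to-run variance
# here suggests the host cannot give repeatable I/O-bound benchmark results.
import os
import statistics
import time

def timed_sync_write(path, size_mb=64, block=1 << 20):
    """Write `size_mb` MiB in 1 MiB blocks, fsync, return elapsed seconds."""
    buf = os.urandom(block)
    start = time.perf_counter()
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        for _ in range(size_mb):
            os.write(fd, buf)
        os.fsync(fd)
    finally:
        os.close(fd)
    os.unlink(path)
    return time.perf_counter() - start

samples = [timed_sync_write("/tmp/io_probe.bin") for _ in range(10)]
mean = statistics.mean(samples)
cv = statistics.stdev(samples) / mean  # coefficient of variation
print(f"mean={mean:.3f}s stdev={statistics.stdev(samples):.3f}s cv={cv:.1%}")
{code}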

      andrew.morton@mongodb.com encountered this recently when reviewing the performance impact of changing how assertions are implemented in WiredTiger.

      We should:

      • Review the hardware used when running performance tests and ensure it is capable of producing repeatable results.
      • Review the workloads run in our automated performance testing and ensure they can give predictable results (i.e., each workload is generally within the capacity of the hardware it runs on).
      • Clearly identify tests that exceed the hardware's capability and establish the value they provide. For those tests, describe metrics that can be measured predictably, or at least document their value somewhere (see the sketch after this list).
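      One way to make "predictable results" concrete (a sketch under assumed names and thresholds, not existing tooling) is to run each benchmark several times on the same host and flag any metric whose coefficient of variation is too high to carry a regression signal:

{code:python}
#!/usr/bin/env python3
# Hypothetical sketch: test names, metric names, sample values and the 5%
# threshold below are assumptions for illustration, not real WiredTiger data.
import statistics

# Metric samples per (test, metric) collected from repeated identical runs.
results = {
    ("evict-btree", "ops_per_sec"): [101200, 99800, 100500, 100900, 99400],
    ("update-heavy-io", "ops_per_sec"): [42000, 61000, 35000, 58000, 48000],
}

CV_THRESHOLD = 0.05  # assume >5% run-to-run variation is too noisy

for (test, metric), samples in results.items():
    mean = statistics.mean(samples)
    cv = statistics.stdev(samples) / mean if mean else float("inf")
    verdict = "OK" if cv <= CV_THRESHOLD else "TOO NOISY"
    print(f"{test:>16} {metric}: mean={mean:,.0f} cv={cv:.1%} -> {verdict}")
{code}

      Tests flagged as noisy would then either move to hardware with enough headroom, or have their value documented in terms of metrics that do remain stable (e.g., counts rather than throughput).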

            Assignee:
            backlog-server-storage-engines [DO NOT USE] Backlog - Storage Engines Team
            Reporter:
            alexander.gorrod@mongodb.com Alexander Gorrod
            Votes:
            0
            Watchers:
            3
