  Core Server / SERVER-82551

Use parallel compressor to speed up binary archival

    • Type: Improvement
    • Resolution: Fixed
    • Priority: Major - P3
    • Fix Version/s: 7.2.0-rc0, 7.0.13
    • Affects Version/s: None
    • Component/s: None
    • Labels: None
    • Assigned Teams: Server Development Platform
    • Backwards Compatibility: Fully Compatible
    • Backport Requested: v7.0
    • Sprint: Dev Tools 2020-04-06

      Summary

      Using the pigz parallel compressor to create the binary tarball would reduce the archive_dist_test_debug task runtime from ~14 min to ~5 min.

      Long description

      The majority of Evergreen tasks run only after the mongo binaries have been compiled, compressed, and uploaded to S3.

      For the Amazon Linux 2 variant these steps take roughly:

      The archive_dist_test_debug task is mostly composed of two parts:

      The compression is performed using the following tar command:

      /bin/tar -C build/install -T /data/mci/5098d994527fa548b1195cf0b5831e45/src/mongo-debugsymbols.tgz.filelist -czf mongo-debugsymbols.tgz
      

      By default, tar uses a single-threaded compression algorithm, which means that we are using only 1 out of the 16 cores available (we currently use amazon2-arm64-large for this task).
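
      For context, the mismatch can be confirmed directly on the build host (a quick sketch, assuming GNU coreutils is installed):

      # Cores available on the amazon2-arm64-large host
      nproc    # prints 16 on this variant
      # gzip, the compressor behind tar's -z flag, only ever uses one of them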

      It is possible to simply tell the tar command to use the parallel compressor pigz to make use of all the available cores.
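
      A minimal sketch of the change (the actual change is in the attached use_pigz_compressor.patch; this assumes pigz is installed on the build host):

      # -I / --use-compress-program swaps the implicit gzip (-z) for pigz,
      # which spreads compression across all online cores by default
      # (pigz -p N can cap the thread count if needed)
      /bin/tar -C build/install -T /data/mci/5098d994527fa548b1195cf0b5831e45/src/mongo-debugsymbols.tgz.filelist -I pigz -cf mongo-debugsymbols.tgz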

      A quick experiment showed that using pigz reduces the tar command execution time from 9.22 min to 35 seconds, a roughly 16x speedup that is in line with the 16 available cores.
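
      The comparison can be reproduced by wrapping both invocations in time (a sketch; FILELIST is hypothetical shorthand for the file list path used above):

      FILELIST=/data/mci/5098d994527fa548b1195cf0b5831e45/src/mongo-debugsymbols.tgz.filelist
      time /bin/tar -C build/install -T "$FILELIST" -czf mongo-debugsymbols.tgz         # single-threaded gzip: ~9.22 min
      time /bin/tar -C build/install -T "$FILELIST" -I pigz -cf mongo-debugsymbols.tgz  # pigz on 16 cores: ~35 s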

      Attachments:
        1. use_pigz_compressor.patch (0.7 kB, Tommaso Tocci)

            Assignee: Tommaso Tocci (tommaso.tocci@mongodb.com)
            Reporter: Tommaso Tocci (tommaso.tocci@mongodb.com)
            Votes: 0
            Watchers: 8
