Core Server / SERVER-28138

rangeDeleter causes timeout

    • Type: Bug
    • Resolution: Done
    • Priority: Major - P3
    • Affects Version/s: None
    • Component/s: Sharding
    • Operating System: ALL

      version: mongodb-linux-x86_64-debian81-3.4.2

      We periodically get "operation exceeded time limit" errors (8500ms) on queries after a moveChunk. It seems to be caused by the rangeDeleter: during the range delete, iostat shows w/s suddenly jumping from about 200 to about 30000. Log excerpts and iostat samples follow, along with a short diagnostic sketch and a possible workaround.

      2017-02-28T11:39:21.285+0800 I SHARDING [RangeDeleter] rangeDeleter deleted 126145 documents for media.media_tags from { _id: -2014043904481584736 } -> { _id: -2013480085714135420 }
      
      2017-02-28T11:39:21.064+0800 I COMMAND  [conn4336687] command media.media_tags command: find { find: "media_tags", filter: { _id: { $in: [ 6391998629359387141 ] } }, maxTimeMS: 5500, shardVersion: [ Timestamp 2812000|1, ObjectId('579a1adf7da6fc22e14e5ce0') ] } planSummary: IXSCAN { _id: 1 } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:201 locks:{ Global: { acquireCount: { r: 2 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 5372778 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 5380ms
      2017-02-28T11:39:21.064+0800 I COMMAND  [conn4336864] command media.media_tags command: find { find: "media_tags", filter: { _id: { $in: [ 6391990642146279942 ] } }, maxTimeMS: 5500, shardVersion: [ Timestamp 2812000|1, ObjectId('579a1adf7da6fc22e14e5ce0') ] } planSummary: IXSCAN { _id: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:324 locks:{ Global: { acquireCount: { r: 2 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 5956125 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 5964ms
      2017-02-28T11:39:21.059+0800 I COMMAND  [conn4337153] command media.media_tags command: find { find: "media_tags", filter: { _id: { $in: [ 6391998586543931909 ] } }, maxTimeMS: 8500, shardVersion: [ Timestamp 2812000|1, ObjectId('579a1adf7da6fc22e14e5ce0') ] } planSummary: IXSCAN { _id: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:324 locks:{ Global: { acquireCount: { r: 2 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 3522470 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 3524ms
      2017-02-28T11:39:21.059+0800 I COMMAND  [conn4336967] command media.media_tags command: find { find: "media_tags", filter: { _id: 6391998563672392198 }, limit: 1, maxTimeMS: 5500, shardVersion: [ Timestamp 2812000|1, ObjectId('579a1adf7da6fc22e14e5ce0') ] } planSummary: IDHACK keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:324 locks:{ Global: { acquireCount: { r: 2 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 4802194 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 4805ms
      2017-02-28T11:39:21.064+0800 I COMMAND  [conn4337415] command media.media_tags command: find { find: "media_tags", filter: { _id: 6391998570349724165 }, limit: 1, maxTimeMS: 5500, shardVersion: [ Timestamp 2812000|1, ObjectId('579a1adf7da6fc22e14e5ce0') ] } planSummary: IDHACK keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:331 locks:{ Global: { acquireCount: { r: 2 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 1090055 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 1096ms
      2017-02-28T11:39:21.064+0800 I COMMAND  [conn4336713] command media.media_tags command: find { find: "media_tags", filter: { _id: { $in: [ 6391998617212682757, 6391998630017892869 ] } }, maxTimeMS: 5500, shardVersion: [ Timestamp 2812000|1, ObjectId('579a1adf7da6fc22e14e5ce0') ] } planSummary: IXSCAN { _id: "hashed" } keysExamined:3 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:338 locks:{ Global: { acquireCount: { r: 2 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 5374038 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 5382ms
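
      In the entries above, the long timeAcquiringMicros on the Global lock accounts for almost the whole command duration, while only a handful of keys/docs are examined, which suggests the finds are queued behind the delete traffic rather than doing expensive work themselves. A minimal sketch of one way to confirm this from the mongo shell while the stall is happening, using standard serverStatus/currentOp fields (the namespace is just our collection):

      // Run against the shard primary during the slowdown.
      var gl = db.serverStatus().globalLock;
      print("queued readers: " + gl.currentQueue.readers +
            ", queued writers: " + gl.currentQueue.writers +
            ", active clients: " + gl.activeClients.total);

      // List operations on the affected namespace that are waiting for a lock.
      db.currentOp(true).inprog
        .filter(function (op) { return op.waitingForLock && op.ns === "media.media_tags"; })
        .forEach(function (op) {
          printjson({ opid: op.opid, secs_running: op.secs_running, desc: op.desc });
        });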
      
      Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
      sda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
      sdb               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
      sdc               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
      
      avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                 2.95    0.00    2.38    0.00    0.00   94.67
      
      Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
      sda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
      sdb               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
      sdc               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
      
      avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                 1.28    0.00    3.08    0.73    0.00   94.91
      
      Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
      sda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
      sdb               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
      sdc               0.00     5.00    0.00 22046.00     0.00 235984.00    21.41    73.18    3.27    0.00    3.27   0.03  64.40
      
      avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                 3.00    0.00    2.37    1.38    0.00   93.25
      
      Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
      sda               0.00     0.00    0.00    3.00     0.00    12.00     8.00     0.00    0.00    0.00    0.00   0.00   0.00
      sdb               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
      sdc               0.00    53.00  492.00 27173.00  5336.00 206907.00    15.34    63.30    2.33    2.30    2.33   0.04  98.40
      
      avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                 0.88    0.00    0.45    1.08    0.00   97.60
      
      Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
      sda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
      sdb               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
      sdc               0.00    82.00  227.00  206.00  2664.00   928.50    16.59     1.63    3.76    2.82    4.80   2.30  99.60
      
      avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                 0.85    0.00    0.40    1.10    0.00   97.64
      
      Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
      sda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
      sdb               0.00     2.00    0.00    2.00     0.00    16.00    16.00     0.02   12.00    0.00   12.00  12.00   2.40
      sdc               1.00    82.00  254.00  208.00  2772.00   924.00    16.00     1.84    3.99    3.42    4.69   2.15  99.20
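
      As a possible workaround (not a fix), it might help to keep migrations, and the range deletes that follow them, away from peak traffic, or to pay the delete cost inside the migration itself. A sketch, assuming it is run from a mongos; the destination shard name and the balancer window are placeholders, and the chunk bounds are taken from the rangeDeleter log line above:

      // Option 1: confine the balancer (and the range deletes that follow
      // migrations) to an off-peak window.
      db.getSiblingDB("config").settings.update(
        { _id: "balancer" },
        { $set: { activeWindow: { start: "02:00", stop: "06:00" } } },
        { upsert: true }
      );

      // Option 2: move a chunk manually and wait for the range delete to
      // finish before the command returns, so it does not compete with
      // foreground reads later. "shard0001" is a placeholder.
      db.adminCommand({
        moveChunk: "media.media_tags",
        bounds: [ { _id: NumberLong("-2014043904481584736") },
                  { _id: NumberLong("-2013480085714135420") } ],
        to: "shard0001",
        _secondaryThrottle: true,
        _waitForDelete: true
      });

      Neither option reduces the write load of the delete itself; it only shifts it to a time, or to a point inside the migration, where it does not compete with latency-sensitive reads.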
      

        Attachments:
          1. diagnostic.data.tar.gz (31.58 MB)
          2. mongodb.log.gz (2.45 MB)

            Assignee: Mark Agarunov (mark.agarunov)
            Reporter: xihui he (bydsky)
            Votes: 0
            Watchers: 5
