Core Server / SERVER-55341

WiredTigerRecordStore should reserve contiguous RecordIds for batch insert

    • Type: Improvement
    • Resolution: Fixed
    • Priority: Major - P3
    • Fix Version/s: 6.1.0-rc0
    • Affects Version/s: None
    • Component/s: None
    • Labels: None
    • Backwards Compatibility: Fully Compatible
    • Sprint: Execution Team 2022-05-16

      Currently we do a separate atomic increment on _nextIdNum for each document in a batch. In addition to performing significantly more atomic ops than necessary, this risks concurrent inserts grabbing interleaved ids rather than leaving this batch a nice contiguous range. I suspect that could lead to increased contention inside WT versus each thread having a private range to play in. This would also stack nicely with SERVER-55337 and SERVER-55338.

      Instead, we should increment _nextIdNum once by the number of documents in the batch and then hand out ids from a local counter. Possible patch:

      diff --git a/src/mongo/db/storage/wiredtiger/wiredtiger_record_store.cpp b/src/mongo/db/storage/wiredtiger/wiredtiger_record_store.cpp
      index 1fca344df7..93c5702e40 100644
      --- a/src/mongo/db/storage/wiredtiger/wiredtiger_record_store.cpp
      +++ b/src/mongo/db/storage/wiredtiger/wiredtiger_record_store.cpp
      @@ -1536,6 +1536,7 @@ Status WiredTigerRecordStore::_insertRecords(OperationContext* opCtx,
       
           Record highestIdRecord;
           invariant(nRecords != 0);
      +    auto recIdGen = _isOplog ? RecordId() : _nextId(opCtx, nRecords);
           for (size_t i = 0; i < nRecords; i++) {
               auto& record = records[i];
               if (_isOplog) {
      @@ -1545,7 +1546,8 @@ Status WiredTigerRecordStore::_insertRecords(OperationContext* opCtx,
                       return status.getStatus();
                   record.id = status.getValue();
               } else {
      -            record.id = _nextId(opCtx);
      +            record.id = recIdGen;
      +            recIdGen = RecordId(recIdGen.as<int64_t>() + 1);
               }
               dassert(record.id > highestIdRecord.id);
               highestIdRecord = record;
      @@ -2029,10 +2031,10 @@ void WiredTigerRecordStore::_initNextIdIfNeeded(OperationContext* opCtx) {
           _nextIdNum.store(nextId);
       }
       
      -RecordId WiredTigerRecordStore::_nextId(OperationContext* opCtx) {
      +RecordId WiredTigerRecordStore::_nextId(OperationContext* opCtx, int64_t numIds) {
           invariant(!_isOplog);
           _initNextIdIfNeeded(opCtx);
      -    RecordId out = RecordId(_nextIdNum.fetchAndAdd(1));
      +    RecordId out = RecordId(_nextIdNum.fetchAndAdd(numIds));
           invariant(out.isNormal());
           return out;
       }
      diff --git a/src/mongo/db/storage/wiredtiger/wiredtiger_record_store.h b/src/mongo/db/storage/wiredtiger/wiredtiger_record_store.h
      index 85275dca25..1fd2131a9d 100644
      --- a/src/mongo/db/storage/wiredtiger/wiredtiger_record_store.h
      +++ b/src/mongo/db/storage/wiredtiger/wiredtiger_record_store.h
      @@ -294,7 +294,7 @@ class WiredTigerRecordStore : public RecordStore {
                                 const Timestamp* timestamps,
                                 size_t nRecords);
       
      -    RecordId _nextId(OperationContext* opCtx);
      +    RecordId _nextId(OperationContext* opCtx, int64_t numIds = 1);
           bool cappedAndNeedDelete() const;
           RecordData _getData(const WiredTigerCursor& cursor) const;
      
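
      For illustration, here is a minimal standalone sketch of the same pattern using std::atomic, where std::atomic<int64_t>::fetch_add plays the role of _nextIdNum.fetchAndAdd. The names nextIdNum, reserveIds, and insertBatch below are hypothetical stand-ins for this sketch, not the actual WiredTigerRecordStore API:

      #include <atomic>
      #include <cstdint>
      #include <iostream>
      #include <vector>

      // Hypothetical stand-in for _nextIdNum.
      std::atomic<std::int64_t> nextIdNum{1};

      // One atomic read-modify-write per batch: claims the contiguous range
      // [returned value, returned value + count) for this caller.
      std::int64_t reserveIds(std::int64_t count) {
          return nextIdNum.fetch_add(count);
      }

      void insertBatch(std::vector<std::int64_t>& outIds, std::size_t nRecords) {
          // A single fetch_add replaces nRecords separate increments, so a
          // concurrent batch cannot interleave its ids into this range.
          std::int64_t id = reserveIds(static_cast<std::int64_t>(nRecords));
          for (std::size_t i = 0; i < nRecords; ++i) {
              outIds.push_back(id++);  // hand out ids from the local counter
          }
      }

      int main() {
          std::vector<std::int64_t> ids;
          insertBatch(ids, 4);
          for (auto id : ids)
              std::cout << id << '\n';  // prints 1 2 3 4: a contiguous range
      }

      Each concurrent caller of insertBatch ends up with its own private, contiguous block of ids, which is exactly what the patch above achieves for batched record inserts.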

             Assignee:
             Yujin Kang Park (yujin.kang@mongodb.com)
             Reporter:
             Mathias Stearn (mathias@mongodb.com)
             Votes:
             0
             Watchers:
             5
