WiredTiger / WT-3408

Replace the btree overflow lock with a generation number.

    • Type: Improvement
    • Resolution: Won't Fix
    • Priority: Major - P3
    • Affects Version/s: None
    • Component/s: Btree

      I was thinking about WT-3402 last night (the changes that cache removed overflow values in the key's WT_UPDATE list rather than in a separate list in the WT_PAGE_MODIFY structure), and it occurred to me that we could replace WT_BTREE.ovfl_lock with a generation number.

      If each cursor operation entered an "overflow" generation before searching the tree, and reconciliation waited for that generation to drain before removing the backing overflow blocks from the file, then I think we'd be guaranteed that any cursor operating in the tree would see the in-memory versions of removed key/value overflow items, without requiring an overflow lock.
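      In rough C, the protocol would look something like the following. This is a minimal sketch using C11 atomics, not existing WiredTiger API; ovfl_gen, session_gen, MAX_SESSIONS and the function names are made-up for illustration, and a real version would hang off WT_BTREE and the per-session structures.

      #include <stdatomic.h>
      #include <stdint.h>

      #define MAX_SESSIONS 128

      /* Global overflow generation; 0 in a slot means "not in a generation". */
      static atomic_uint_fast64_t ovfl_gen = 1;
      static atomic_uint_fast64_t session_gen[MAX_SESSIONS];

      /* Reader: publish the generation we're operating in before searching. */
      static void
      ovfl_gen_enter(int session_id)
      {
          uint_fast64_t g;

          /*
           * Publish, then re-check: if the global generation moved while
           * we were publishing, republish so a concurrent drain can't
           * miss us.
           */
          do {
              g = atomic_load(&ovfl_gen);
              atomic_store(&session_gen[session_id], g);
          } while (atomic_load(&ovfl_gen) != g);
      }

      static void
      ovfl_gen_leave(int session_id)
      {
          atomic_store(&session_gen[session_id], 0);
      }

      /*
       * Reconciliation: after the removed overflow value is cached in
       * memory, bump the generation and wait for any reader that entered
       * under the old generation to drain; only then free the backing
       * blocks from the file.
       */
      static void
      ovfl_gen_drain(void)
      {
          uint_fast64_t target;
          int i;

          target = atomic_fetch_add(&ovfl_gen, 1) + 1;
          for (i = 0; i < MAX_SESSIONS; ++i)
              while (atomic_load(&session_gen[i]) != 0 &&
                  atomic_load(&session_gen[i]) < target)
                  ;    /* spin; a real version would yield or back off */
      }

      Any cursor that entered before the bump holds a generation smaller than the target, so the drain waits for it; any cursor entering afterward publishes the target generation or later, sees the cached in-memory value, and doesn't block the free.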

      We'd no longer have to acquire any locks to read overflow items.

      Additionally, we would no longer have to track removed overflow values across reconciliations; that is, the list of cached overflow-value cell/update address pairs that WT-3402 maintains in WT_PAGE_MODIFY.ovfl_track.remove just goes away.

      The only downside I see is that we'd have to enter the overflow generation even if there aren't any overflow objects in the tree. (Technically, we could probably limit entering to trees or pages where there are overflow objects, since we actually know that, but I haven't thought it through.)
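      Continuing the sketch above, that refinement might look like this; has_ovfl_items is a hypothetical per-tree flag, set the first time the tree creates an overflow item and never cleared while the tree is in use, otherwise a reader could skip the generation while removals are in flight.

      #include <stdbool.h>

      struct btree_sketch {
          bool has_ovfl_items;    /* hypothetical: tree has ever created an overflow item */
      };

      /* Only pay the generation cost on trees that can contain overflow items. */
      static bool
      ovfl_gen_enter_if_needed(struct btree_sketch *btree, int session_id)
      {
          if (!btree->has_ovfl_items)
              return (false);
          ovfl_gen_enter(session_id);
          return (true);    /* caller calls ovfl_gen_leave() iff we returned true */
      }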

      Also, there's some trickiness in avoiding generational self-deadlock if a thread searching the tree (and so already inside the overflow generation) gets co-opted to do eviction and ends up waiting for that same generation to drain.
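      One possible shape for that, again continuing the sketch above: the drain never waits on the caller's own slot, on the theory that a thread co-opted into eviction is no longer using blocks from its pre-eviction view. Whether that theory actually holds is exactly the part that needs thinking through.

      /*
       * Drain for a thread that may itself be inside the overflow
       * generation (e.g., a searching thread co-opted to do eviction).
       * Skipping our own slot avoids spinning on ourselves forever; it is
       * only correct if the caller really has stopped using blocks from
       * its published generation.
       */
      static void
      ovfl_gen_drain_self_safe(int self_id)
      {
          uint_fast64_t target;
          int i;

          target = atomic_fetch_add(&ovfl_gen, 1) + 1;
          for (i = 0; i < MAX_SESSIONS; ++i) {
              if (i == self_id)
                  continue;    /* never wait on our own published generation */
              while (atomic_load(&session_gen[i]) != 0 &&
                  atomic_load(&session_gen[i]) < target)
                  ;
          }
      }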

      sulabh.mahajan, alexander.gorrod, michael.cahill: thoughts?

            Assignee: [DO NOT USE] Backlog - Storage Engines Team (backlog-server-storage-engines)
            Reporter: Keith Bostic (keith.bostic@mongodb.com) (Inactive)
            Votes: 0
            Watchers: 4
