- Type: Bug
- Resolution: Unresolved
- Priority: Major - P3
- Affects Version/s: None
- Component/s: None
- Catalog and Routing
- Fully Compatible
- ALL
- v8.0
- CAR Team 2024-04-15, CAR Team 2024-04-29, CAR Team 2024-05-13
- 143
The issue arises from the following sequence of events:
- A collection clone starts on the syncing shard
- An index build (_id_) starts on the syncing shard
- The collection is dropped on the primary shard
- The index build (_id_) stops on the syncing shard
- Uncommitted indexes are dropped on the syncing shard
- Oplog replay begins on the syncing shard
- An oplog entry relies on the _id_ index on the syncing shard
- Crash
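The sequence above can be sketched as a toy simulation (Python, with hypothetical names; this is not actual server code) of why oplog replay crashes once the uncommitted _id_ index has been discarded:

```python
class SyncingShard:
    """Toy model of a shard doing initial sync (hypothetical, not server code)."""

    def __init__(self):
        self.committed = {}    # collection -> set of committed index names
        self.uncommitted = {}  # collection -> set of in-progress index builds

    def start_collection_clone(self, coll):
        self.committed[coll] = set()
        self.uncommitted[coll] = set()

    def start_index_build(self, coll, index):
        self.uncommitted[coll].add(index)

    def on_collection_drop(self, coll):
        # Replicating the primary's drop aborts in-progress builds, so the
        # uncommitted _id_ index is discarded and never committed.
        self.uncommitted[coll].clear()

    def replay_oplog_entry(self, coll):
        # Oplog application assumes the _id_ index exists.
        if "_id_" not in self.committed[coll] | self.uncommitted[coll]:
            raise RuntimeError("_id_ index missing during oplog replay")


shard = SyncingShard()
shard.start_collection_clone("db.coll")     # collection cloning starts
shard.start_index_build("db.coll", "_id_")  # _id_ index build starts
shard.on_collection_drop("db.coll")         # drop arrives from the primary
try:
    shard.replay_oplog_entry("db.coll")     # entry relies on the _id_ index
except RuntimeError as exc:
    print("crash:", exc)
```

In the real server the failed assumption is fatal (fassert), which is the crash at the end of the sequence.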
The regression was introduced by SERVER-78615.
The suggested solution is to avoid performing modifications on views while initial data sync is in progress, and to reload the views when onInitialDataAvailable fires.
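A minimal sketch of that suggestion (hypothetical Python, not the server implementation): defer in-memory view catalog updates while initial sync is running, then rebuild the in-memory state from the durable state once onInitialDataAvailable fires.

```python
class ViewCatalog:
    """Toy view catalog (hypothetical) illustrating the suggested fix:
    skip in-memory view modifications during initial sync, then reload
    once onInitialDataAvailable fires."""

    def __init__(self):
        self.in_initial_sync = True
        self.views = {}    # in-memory view definitions
        self.durable = {}  # stand-in for the on-disk system.views state

    def modify_view(self, name, definition):
        self.durable[name] = definition
        if self.in_initial_sync:
            return  # defer: don't touch the in-memory catalog mid-sync
        self.views[name] = definition

    def on_initial_data_available(self):
        self.in_initial_sync = False
        self.views = dict(self.durable)  # reload views from durable state
```

The point of the design is that no in-memory view state is consulted or mutated while the sync's cloning and oplog-replay phases are still racing with operations replicated from the primary.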
- depends on
  - SERVER-90268 Investigate potential issues due to lack of ordering in onInitialDataAvailable calls (Open)
- is depended on by
  - SERVER-78615 Poor view drop performance leads to replication lag (Blocked)
- is duplicated by
  - SERVER-89345 Investigate why reloadViews need in FallbackOpObserver::onDelete (Closed)
- is related to
  - SERVER-90268 Investigate potential issues due to lack of ordering in onInitialDataAvailable calls (Open)
- related to
  - SERVER-89942 Avoid calling shard server opobserver during recovery procedures (Closed)
  - SERVER-90682 Keep the in-memory status of views up-to-date without reloading them on every operation in FallbackOpObserver (Backlog)