Type: Bug
Resolution: Fixed
Priority: Major - P3
Affects Version/s: 3.6.4
Component/s: mongorestore
We made a dump with `--oplog` from a properly working system, but restoring it consistently failed with `E11000 duplicate key` errors.
The duplicate key clearly does not exist in the source database, which has a unique index on the field. But since the dump takes hours to complete, a document that was deleted during the dump can still end up in it, alongside a document created after that deletion while the dump was still running.
This is exactly the situation that `--oplogReplay` should address, but the oplog appears to be replayed after the indexes are recreated, so the restore fails during index creation because it sees a state where duplicate data is present.
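The race described above can be sketched without MongoDB at all. The following is a minimal simulation (not mongodump itself; the collection, cursor loop, and `email` field are illustrative assumptions): a "dump" scans documents in `_id` order while a concurrent writer deletes a document and re-inserts its unique value under a new `_id`, so the finished dump contains two documents that violate the unique index.

```python
# Hypothetical in-memory stand-in for a collection with a unique index on "email".
collection = {1: {"_id": 1, "email": "a@example.com"},
              2: {"_id": 2, "email": "b@example.com"}}

dump = []
cursor = 0
while True:
    # The "dump" advances like a cursor: next document with _id > cursor.
    remaining = [i for i in collection if i > cursor]
    if not remaining:
        break
    cursor = min(remaining)
    dump.append(dict(collection[cursor]))
    if cursor == 1:
        # Concurrent writer, mid-dump: delete doc 1 and re-create the same
        # unique value under a new _id. Both versions will reach the dump.
        del collection[1]
        collection[3] = {"_id": 3, "email": "a@example.com"}

emails = [d["email"] for d in dump]
# The dump now holds two documents with email "a@example.com"; rebuilding a
# unique index from this data is what surfaces the E11000 error on restore.
assert emails.count("a@example.com") == 2
```

Replaying the oplog would resolve this (the delete of `_id` 1 is in the oplog), but only if replay happens before the unique index is rebuilt, which is the ordering this report questions.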
- is depended on by:
  - TOOLS-176 Dump/Restore with --oplog not point-in-time (Closed)
- is duplicated by:
  - TOOLS-176 Dump/Restore with --oplog not point-in-time (Closed)
- is related to:
  - TOOLS-176 Dump/Restore with --oplog not point-in-time (Closed)
  - TOOLS-2838 Disallow use of --oplogReplay without --drop (Accepted)
  - TOOLS-2226 Recover from duplicate key errors when restoring with oplog (Closed)
- related to:
  - TOOLS-1385 Mongorestore should allow doing all index builds only at the end (Closed)
- links to