- Type: Bug
- Resolution: Done
- Priority: Major - P3
- Affects Version/s: 3.0.16, 3.4.18, 3.6.9, 4.0.5
- Component/s: Sharding
- Assigned Teams: Sharding EMEA
- Backwards Compatibility: Fully Compatible
- (copied to CRM)
ISSUE SUMMARY
When dropping a database or collection in a sharded cluster, the drop may be reported as successful while the database or collection is still present on some nodes in the cluster. In MongoDB 4.2 and later, rerunning the drop command should clean up the remaining data. In MongoDB 4.0 and earlier, we do not recommend dropping a database or collection and then attempting to reuse the namespace.
USER IMPACT
When the database/collection is not successfully dropped on a given node, the corresponding files continue to consume disk space on that node. Attempting to reuse the namespace may lead to undefined behavior.
WORKAROUNDS
To work around this issue, follow the steps below to drop a database/collection in a sharded environment.
MongoDB 4.4:
- Drop the database / collection using a mongos
- Rerun the drop command using a mongos
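The two steps above can be sketched in the mongo shell, connected to a mongos (DATABASE and COLLECTION are placeholder names; substitute your own):

```javascript
// Drop the collection, then rerun the drop. In 4.2 and later the second
// drop cleans up any data or metadata left behind if the first attempt
// only partially succeeded on some shards.
db.getSiblingDB("DATABASE").getCollection("COLLECTION").drop()
db.getSiblingDB("DATABASE").getCollection("COLLECTION").drop()

// Or, when dropping the whole database:
db.getSiblingDB("DATABASE").dropDatabase()
db.getSiblingDB("DATABASE").dropDatabase()
```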
MongoDB 4.2:
- Drop the database / collection using a mongos
- Rerun the drop command using a mongos
- Connect to each mongos and run flushRouterConfig
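The final step above can be sketched as follows, run while connected to each mongos in turn (flushRouterConfig is the standard admin command for this):

```javascript
// Clear this mongos's cached routing metadata so it reloads from the
// config servers on the next operation, instead of routing requests
// using stale information about the dropped namespace.
db.adminCommand({ flushRouterConfig: 1 })
```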
MongoDB 4.0 and earlier:
- Drop the database / collection using a mongos
- Connect to each shard's primary and verify the namespace has been dropped. If it has not, drop it. Dropping a database (e.g. db.dropDatabase()) removes the data files on disk for the database being dropped.
- Connect to a mongos, switch to the config database and remove any reference to the dropped namespace from the chunks, collections, databases, locks and tags collections:
When dropping a database:
use config
db.collections.remove( { _id: /^DATABASE\./ }, { writeConcern: { w: 'majority' } } )
db.databases.remove( { _id: "DATABASE" }, { writeConcern: { w: 'majority' } } )
db.chunks.remove( { ns: /^DATABASE\./ }, { writeConcern: { w: 'majority' } } )
db.tags.remove( { ns: /^DATABASE\./ }, { writeConcern: { w: 'majority' } } )
db.locks.remove( { _id: /^DATABASE\./ }, { writeConcern: { w: 'majority' } } )
When dropping a collection:
use config
db.collections.remove( { _id: "DATABASE.COLLECTION" }, { writeConcern: { w: 'majority' } } )
db.chunks.remove( { ns: "DATABASE.COLLECTION" }, { writeConcern: { w: 'majority' } } )
db.tags.remove( { ns: "DATABASE.COLLECTION" }, { writeConcern: { w: 'majority' } } )
db.locks.remove( { _id: "DATABASE.COLLECTION" }, { writeConcern: { w: 'majority' } } )
- Connect to the primary of each shard and remove any reference to the dropped namespace from the cache.databases, cache.collections and cache.chunks.DATABASE.COLLECTION collections:
When dropping a database:
db.getSiblingDB("config").cache.databases.remove( { _id: "DATABASE" }, { writeConcern: { w: 'majority' } } );
db.getSiblingDB("config").cache.collections.remove( { _id: /^DATABASE\./ }, { writeConcern: { w: 'majority' } } );
db.getSiblingDB("config").getCollectionNames().forEach(function(y) {
    if (y.indexOf("cache.chunks.DATABASE.") == 0)
        db.getSiblingDB("config").getCollection(y).drop();
})
When dropping a collection:
db.getSiblingDB("config").cache.collections.remove( { _id: "DATABASE.COLLECTION" }, { writeConcern: { w: 'majority' } } );
db.getSiblingDB("config").getCollection("cache.chunks.DATABASE.COLLECTION").drop()
- Connect to each mongos and run flushRouterConfig
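The per-shard verification step above can be sketched as follows, run while connected to each shard's primary. This is a sketch for the dropped-database case; DATABASE is a placeholder, and only commands that exist in the affected versions (listDatabases, dropDatabase) are used:

```javascript
// List the databases present on this shard and check whether the
// dropped database survived the cluster-wide drop.
var dbs = db.adminCommand({ listDatabases: 1 }).databases;
var stillPresent = dbs.some(function(d) { return d.name === "DATABASE"; });

if (stillPresent) {
    // The drop did not reach this shard; drop the database locally.
    // dropDatabase() also removes the database's data files on disk,
    // reclaiming the space described under USER IMPACT.
    db.getSiblingDB("DATABASE").dropDatabase();
}
```

When only a collection was dropped, the analogous check is whether the collection name still appears in db.getSiblingDB("DATABASE").getCollectionNames() on each shard.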
is duplicated by:
- SERVER-2782 no rollback of chunking if chunking fails (e.g. for large collections) - Closed
- SERVER-16836 Cluster can create the same unsharded collection on more than one shard - Closed
- SERVER-17884 Can't drop a database in sharded environment - Closed
- SERVER-19603 Non-Sharded database is present in two different shards (with different data) - Closed
- SERVER-21866 couldn't find database [aaa] in config db - Closed
- SERVER-39167 Concurrent insert into nonexistent database often fails with "database not found" on 4.1.7 - Closed
- SERVER-5521 Better user feedback when drop command was not executed successfully on all shards - Closed

is related to:
- SERVER-32716 Dropping sharded database or collection leaves orphaned zone documents - Closed
- SERVER-47372 config.cache collections can remain even after collection has been dropped - Closed
- MONGOID-4826 Support hashed shard key declarations, add rake task to shard collections - Closed

related to:
- SERVER-72797 Remove sharding exceptions from invalidated_cursors FSM - Closed
- SERVER-33973 Force cleanup of possibly remaining partial data (from failed collection/database drop) when rerunning dropCollection command - Closed
- SERVER-19811 Disable FSM workloads that drop and reuse sharded namespaces - Closed
- SERVER-14678 Cleanup leftover collection metadata after drop - Closed