- Type: Bug
- Resolution: Incomplete
- Priority: Major - P3
- None
- Affects Version/s: 4.0.27
- Component/s: None
- ALL
- Sharding 2022-02-21, Sharding 2022-03-07, Sharding NYC 2022-03-21, Sharding NYC 2022-04-04, Sharding NYC 2022-04-18, Sharding 2022-05-02, Sharding NYC 2022-05-16, Sharding NYC 2022-05-30, Sharding NYC 2022-06-13, Sharding 2022-06-27, Sharding 2022-07-11, Sharding 2022-07-25, Sharding 2022-08-08, Sharding 2022-08-22, Sharding 2022-09-05, Sharding 2022-09-19, Sharding 2022-10-03, Sharding 2022-10-17, Sharding NYC 2022-10-31, Sharding NYC 2022-11-14, Sharding NYC 2022-11-28, Sharding 2022-12-12, Sharding NYC 2022-12-26, Sharding NYC 2023-01-09, Sharding NYC 2023-01-23, Sharding NYC 2023-02-06, Sharding NYC 2023-02-20, Sharding NYC 2023-03-06, Sharding NYC 2023-03-20, Sharding NYC 2023-04-03, Sharding NYC 2023-04-17, Sharding NYC 2023-05-01, Sharding NYC 2023-05-15
- 5
We are running a 3-shard cluster with the latest MongoDB v4 (v4.0.27). The logs of one shard constantly show that chunks of the config.system.sessions collection cannot be moved because the collection UUIDs differ:
2021-09-17T09:52:36.600+0200 W SHARDING [conn350] Chunk move failed :: caused by :: OperationFailed: Data transfer error: migrate failed: InvalidUUID: Cannot create collection config.system.sessions because we already have an identically named collection with UUID cfebe605-b114-47d7-a490-691a3668a2d7, which differs from the donor's UUID b7fa2327-0b02-40fe-bf5b-1c1e0e32fefb. Manually drop the collection on this shard if it contains data from a previous incarnation of config.system.sessions
The log message suggests dropping the collection, but MongoDB's documentation clearly states not to drop collections from the config database.
We cannot even run find on this collection, because the user is not authorized:
offerStoreIT01:PRIMARY> db.system.sessions.find()
Error: error: { "operationTime" : Timestamp(1631867031, 157), "ok" : 0, "errmsg" : "not authorized on config to execute command { find: \"system.sessions\", filter: {}, lsid: { id: UUID(\"fd897f41-b167-4ee4-82ca-e037d2b7bada\") }, $clusterTime: { clusterTime: Timestamp(1631866971, 1435), signature: { hash: BinData(0, FC3E1B6F3AC0342900DEC1131A12EC3191387A9F), keyId: 6948315370398680269 } }, $db: \"config\" }", "code" : 13,
However, the user has the following roles:
"roles" : [ { "role" : "clusterManager", "db" : "admin" }, { "role" : "restore", "db" : "admin" }, { "role" : "clusterMonitor", "db" : "admin" }, { "role" : "dbAdmin", "db" : "admin" }, { "role" : "readAnyDatabase", "db" : "admin" }, { "role" : "userAdminAnyDatabase", "db" : "admin" }, { "role" : "hostManager", "db" : "admin" }, { "role" : "dbAdminAnyDatabase", "db" : "admin" }, { "role" : "readWriteAnyDatabase", "db" : "admin" }, { "role" : "userAdmin", "db" : "admin" }, { "role" : "readWrite", "db" : "admin" }, { "role" : "clusterAdmin", "db" : "admin" }, { "role" : "root", "db" : "admin" } ]
So the suggestion in the log message to drop the collection does not appear to work, and it also contradicts MongoDB's documentation.
So how to fix this UUID issue?
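Before deciding on a fix, one way to confirm the mismatch is to run db.getSiblingDB("config").getCollectionInfos({ name: "system.sessions" }) against each shard's primary and diff the reported UUIDs. The helper below is an illustrative sketch (not MongoDB tooling): the shard names are examples, and the sample data reuses the two UUIDs from the log message above:

```javascript
// Illustrative sketch: given the getCollectionInfos() result collected from
// each shard, report which shards disagree with the first shard's UUID for
// config.system.sessions.
function findUuidMismatches(infosByShard) {
  const uuids = Object.entries(infosByShard).map(([shard, infos]) => ({
    shard,
    // listCollections reports the collection UUID under info.uuid.
    uuid: infos.length ? String(infos[0].info.uuid) : null,
  }));
  const reference = uuids[0]; // treat the first shard as the reference
  return uuids
    .filter(u => u.uuid !== reference.uuid)
    .map(u => `${u.shard}: ${u.uuid} != ${reference.shard}: ${reference.uuid}`);
}

// Example with the two UUIDs from the log message above (shard names invented):
const mismatches = findUuidMismatches({
  shard0: [{ name: "system.sessions", info: { uuid: "cfebe605-b114-47d7-a490-691a3668a2d7" } }],
  shard1: [{ name: "system.sessions", info: { uuid: "b7fa2327-0b02-40fe-bf5b-1c1e0e32fefb" } }],
});
mismatches.forEach(m => console.log(m));
```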
- related to SERVER-63592: Investigate how a sharded cluster can end up with config.system.sessions collection with different uuids (Backlog)