From the server selection spec, section "Topology types: ReplicaSetWithPrimary or ReplicaSetNoPrimary: Read Operations":
For all read preferences modes except 'primary', clients MUST set the slaveOK wire protocol flag to ensure that any suitable server can handle the request. Clients MUST NOT set the slaveOK wire protocol flag if the read preference mode is 'primary'.
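To make the rule concrete, here is a minimal sketch of the flag computation using libmongoc's public query-flag and read-preference types. Note that apply_slave_ok() is a hypothetical helper written for illustration, not an existing libmongoc function:

```c
#include <mongoc.h>

/* Hypothetical helper (not a libmongoc API): compute the slaveOK bit
 * from the read preference mode, per the server selection spec. */
static mongoc_query_flags_t
apply_slave_ok (mongoc_query_flags_t flags, const mongoc_read_prefs_t *prefs)
{
   mongoc_read_mode_t mode =
      prefs ? mongoc_read_prefs_get_mode (prefs) : MONGOC_READ_PRIMARY;

   if (mode != MONGOC_READ_PRIMARY) {
      /* Any mode except 'primary': MUST set slaveOK. */
      return flags | MONGOC_QUERY_SLAVE_OK;
   }

   /* Mode 'primary': MUST NOT set slaveOK. */
   return (mongoc_query_flags_t) (flags & ~MONGOC_QUERY_SLAVE_OK);
}
```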
As mentioned in the code review for CDRIVER-1872, mongoc_cursor_set_hint() currently sets the slaveOK bit even when querying a primary, which conflicts with the spec.
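For reference, a sketch of how the issue surfaces. This is an illustrative repro, not code from the review; server_id is assumed to identify the primary (e.g. as reported by server selection):

```c
#include <mongoc.h>

/* Hypothetical repro sketch: 'server_id' is assumed to identify the
 * primary (e.g. obtained from server selection). */
static void
read_from_pinned_server (mongoc_collection_t *collection, uint32_t server_id)
{
   bson_t filter = BSON_INITIALIZER;
   mongoc_cursor_t *cursor =
      mongoc_collection_find_with_opts (collection, &filter, NULL, NULL);
   const bson_t *doc;

   /* Pin the cursor to the chosen server. With the current behavior,
    * libmongoc sets slaveOK on the query even when server_id is the
    * primary and the read preference is 'primary'. */
   mongoc_cursor_set_hint (cursor, server_id);

   while (mongoc_cursor_next (cursor, &doc)) {
      /* process doc */
   }

   mongoc_cursor_destroy (cursor);
   bson_destroy (&filter);
}
```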
Fixing this means that users will now receive a "not master" error when libmongoc believes a node is the primary, omits the slaveOK bit, and the server is actually in a non-primary state. That said, how would a host transition from primary to a non-primary state without dropping its connections? Would this "not master" edge case even happen in practice, or is it purely theoretical?
As an aside, I'm not familiar with the reasoning behind the spec's rule on not setting slaveOK for primary queries. Apart from mongos, which infers a read preference behavior based on slaveOK, the bit seems like an implementation detail. Would it be worthwhile to revise the spec to simply allow drivers to specify slaveOK as they wish for non-mongos connections?
Related issue: CDRIVER-1872, "mongoc_cursor_set_hint causes secondary reads in sharded cluster" (Closed)