- Type: Bug
- Resolution: Unresolved
- Priority: Major - P3
- Affects Version/s: None
- Component/s: None
- Query Execution
- ALL
- QE 2024-01-22, QE 2024-02-05, QE 2024-02-19, QE 2024-03-04, QE 2024-03-18, QE 2024-04-01, QE 2024-04-15, QE 2024-04-29, QE 2024-05-13, QE 2024-05-27, QE 2024-06-10, QE 2024-06-24, QE 2024-07-08, QE 2024-07-22, QE 2024-08-05, QE 2024-08-19, QE 2024-09-02, QE 2024-09-16, QE 2024-09-30, QE 2024-10-14, QE 2024-10-28, QE 2024-11-11, QE 2024-11-25
(copied to CRM)
(This ticket is related to support case https://support.mongodb.com/case/01207868)
In our Ruby/Mongoid-based application, we set an application-wide socketTimeout value of 30 seconds. In MongoDB 6.x and earlier, this reliably caused slow queries to be killed when the socket closed.
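For reference, a minimal sketch of how such an application-wide socket timeout can be set with the underlying Ruby driver (the ticket does not include the actual configuration; host and database names below are placeholders). In a Mongoid application, the same socket_timeout option would be passed under the client's options in mongoid.yml.

```ruby
require 'mongo'

# Hypothetical client setup illustrating an application-wide 30-second socket
# timeout; host and database names are placeholders, not taken from the ticket.
client = Mongo::Client.new(
  ['localhost:27017'],
  database: 'app_db',
  socket_timeout: 30 # seconds; socket reads/writes time out after this
)
```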
Since upgrading from MongoDB 6.0.11 to 7.0.2, we have noticed that many long-running queries are no longer killed in this manner and instead execute indefinitely.
The MongoDB team has suggested that this may be due to the new slot-based execution engine (SBE) in MongoDB 7.x “yielding less frequently” to evaluate the timeout condition.
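One way to check whether a given query actually runs under SBE is to inspect its explain output: on an unsharded deployment, SBE plans include a slotBasedPlan section in the winning plan, which classic-engine plans lack. A minimal diagnostic sketch with the Ruby driver (collection, filter, and connection details are placeholders):

```ruby
require 'mongo'

# Hypothetical check for whether a suspect query is executed by SBE.
client = Mongo::Client.new(['localhost:27017'], database: 'app_db')

plan    = client[:orders].find(status: 'pending').explain
winning = plan.dig('queryPlanner', 'winningPlan') || {}
engine  = winning.key?('slotBasedPlan') ? 'SBE' : 'classic'
puts "winning plan executed by the #{engine} engine"
```

If SBE is confirmed for the affected queries, temporarily forcing the classic engine (for example via the internalQueryFrameworkControl server parameter set to forceClassicEngine) may help confirm or rule out the yielding hypothesis.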
Moreover, we suspect there is a “death spiral” effect whereby un-killed long-running queries lead to fewer yields, which lets more queries escape the timeout kill, and so on in a positive feedback loop. (This is just a suspicion; we have no hard evidence for it.)