- Type: Bug
- Resolution: Unresolved
- Priority: Major - P3
- Affects Version/s: None
- Component/s: None
- RSS Sydney
- Execution Team 2024-05-13, Execution Team 2024-05-27, Execution Team 2024-06-10, Execution Team 2024-06-24, PopcornChicken - 2024-09-17, MorningKaraoke 2024-10-01, BananaDuck - 2024-10-15, CookieFloss - 2024-10-29, Party@Gregs - 2024-11-12, TeamTummy - 2024-11-26
Right now we allow up to double maxValidateMemoryUsageMB during the second phase of validation. This looks like deliberately doubling the memory limit, but it actually compensates for double-counting the index keys tracked in the first phase. If a user sets maxValidateMemoryUsageMB close to their maximum available memory (as on Atlas), validation can OOM, because the limit calculation undercounts actual memory usage: it ignores the other metadata stored alongside each key. We should respect maxValidateMemoryUsageMB as a hard memory limit and consider doubling the server default instead.
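To make the problem concrete, here is a minimal sketch of the current behavior. All names are hypothetical and this is not the actual server code: each tracked key is charged only its raw bytes, and the second-phase check allows twice the configured limit.

```cpp
#include <cstddef>
#include <string>

// Hypothetical sketch of today's accounting; all names are illustrative.
struct ValidateMemoryTracker {
    size_t limitBytes;          // maxValidateMemoryUsageMB * 1024 * 1024
    size_t estimatedBytes = 0;  // running estimate across tracked keys

    // First phase: each key is charged only its raw byte length, so any
    // per-key metadata (container nodes, RecordIds, bookkeeping) is
    // invisible to the estimate.
    void trackKey(const std::string& keyString) {
        estimatedBytes += keyString.size();
    }

    // Second phase: the effective cap is 2x the configured limit. The
    // intent is to offset keys being counted twice in the first phase,
    // but it reads as if we intentionally use double the memory.
    bool withinSecondPhaseLimit() const {
        return estimatedBytes <= 2 * limitBytes;
    }
};
```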
When calculating the estimated memory usage, we should also consider accounting for the other metadata that is stored along with each key during validation.
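One possible shape for that accounting is sketched below. Again, the names are hypothetical, and the per-entry overhead constant is an assumption rather than a measured value: each key is charged its raw bytes plus an estimated metadata overhead, and maxValidateMemoryUsageMB is respected as a hard cap in every phase.

```cpp
#include <cstddef>
#include <string>

// Assumed fixed overhead per tracked key (container node, RecordId,
// bookkeeping); a real fix would measure or derive this value.
constexpr size_t kPerKeyOverheadBytes = 64;

struct ValidateMemoryTracker {
    size_t limitBytes;          // maxValidateMemoryUsageMB * 1024 * 1024
    size_t estimatedBytes = 0;

    // Charge the key's bytes plus the estimated per-key metadata, so the
    // estimate tracks actual allocations more closely.
    void trackKey(const std::string& keyString) {
        estimatedBytes += keyString.size() + kPerKeyOverheadBytes;
    }

    // The configured limit is a hard cap in every phase; if a larger
    // budget is desirable, double the server default instead.
    bool withinLimit() const {
        return estimatedBytes <= limitBytes;
    }
};
```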
- split from SERVER-93766 Refactor Validation code
- Investigating