- Type: Bug
- Resolution: Done
- Priority: Major - P3
- None
- Affects Version/s: None
- Component/s: None
- ALL
Would it be possible to improve pagination performance for large collections by making better use of indexes?
Consider a collection of 200,000 documents, paginated with $skip and $limit and indexed on the sort order.
The first page takes a few milliseconds, but the last page can take a few seconds, because the index is scanned from the start while counting off the $skip.
What if, as an optimization, the query planner checked whether $skip is greater than half the number of indexed documents, and if so, walked the index in reverse and shifted results into the front of the result array instead of pushing them onto the end?
This way, no more than half the index would ever need to be scanned.
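To make the idea concrete, here is a minimal sketch in Python that simulates the proposed planner behavior over a sorted list standing in for an index. This is a hypothetical illustration of the suggestion, not MongoDB's actual planner code; the function name `paginate` and all variable names are invented for this example.

```python
from collections import deque

def paginate(index, skip, limit):
    """Return the page index[skip : skip + limit], scanning at most
    half the index by choosing a forward or reverse traversal."""
    n = len(index)
    if skip <= n // 2:
        # Cheap case: forward scan, skip `skip` entries, take `limit`.
        return index[skip:skip + limit]
    # Expensive case: the page is in the back half, so walk the
    # index in reverse instead.
    tail = max(0, n - skip - limit)      # entries after the requested page
    want = max(0, min(limit, n - skip))  # page size, clipped at the end
    page = deque()
    for i, doc in enumerate(reversed(index)):
        if i < tail:
            continue            # still skipping the trailing entries
        page.appendleft(doc)    # shift into the front to preserve sort order
        if len(page) == want:
            break
    return list(page)
```

For example, `paginate(list(range(10)), 7, 2)` returns `[7, 8]` after scanning only three entries from the tail of the index, rather than nine from the head.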