- Type: Bug
- Resolution: Works as Designed
- Priority: Major - P3
- Fix Version/s: None
- Affects Version/s: None
- Component/s: Sharding
- Labels: None
- Operating System: ALL
Query 2017-02-13
Hi,
I have a sharded cluster of two replica sets running MongoDB 3.2.10. The database contains a sharded collection called dataSources. When I retrieve all dataSources documents in batches (pages) of 500 at a time using the skip() and limit() operators, mongos returns duplicates.
This can be verified using the following script:
    // Page through the collection 500 documents at a time and report any _id seen twice.
    var dsIds = {};
    var skip = 0;
    var limit = 500;
    for (;;) {
        var count = 0;
        print("reading page, skip=" + skip);
        db.dataSources.find().skip(skip).limit(limit).forEach(function(ds) {
            var dsId = ds._id;
            var oldVal = dsIds[dsId];
            if (oldVal != null) {
                print("duplicate: " + dsId);
            }
            dsIds[dsId] = 1;
            count = count + 1;
        });
        print("read " + count + " docs");
        skip = skip + count;
        if (count < limit) {
            print("all done");
            break;
        }
    }
The script prints several lines of the form "duplicate: ....".
When run against the same database with MongoDB 3.0.4 (mongod + mongos), the script detects no duplicates.
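For context on the "Works as Designed" resolution: a find() with no sort() has no guaranteed, stable document order, and on a sharded cluster mongos merges results from multiple shards, so consecutive skip()/limit() pages are not taken from one fixed ordering and can repeat or miss documents. The sketch below is a minimal variant of the script above that adds an explicit sort on _id (assuming that ordering is acceptable for this workload); the sort() call is the only substantive change.

    // Sketch only: same pagination loop, but every page is drawn from one
    // deterministic ordering (the default _id index), so page boundaries line up.
    var dsIds = {};
    var skip = 0;
    var limit = 500;
    for (;;) {
        var count = 0;
        db.dataSources.find().sort({ _id: 1 }).skip(skip).limit(limit).forEach(function(ds) {
            if (dsIds[ds._id] != null) {
                print("duplicate: " + ds._id);
            }
            dsIds[ds._id] = 1;
            count = count + 1;
        });
        skip = skip + count;
        if (count < limit) {
            break;
        }
    }

A range-based variant that remembers the last _id seen and issues find({ _id: { $gt: lastId } }).sort({ _id: 1 }).limit(500) avoids large skip() values altogether, which also scales better as the collection grows.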
- related to: SERVER-28195 "$skip followed by $limit in aggregation resort & lost records when $sort by equal values" (Closed)