There are 1 million geo documents in my database, and 20% of them have the location [-1.0, -1.0].
I run the simple query db.test.find({lc: {$near: [-1.0, -1.0]}}).limit(1), which is extremely slow: it takes about 20 to 30 seconds. I suspect this is because $near sorts all matching documents by distance before returning even one result, even though many of the locations are identical.
CPU usage spikes to 100% and mongod hangs; every other query has to wait for the geo query to complete.
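If the goal is just to fetch one nearby document, two standard workarounds may help. This is a mongo shell sketch, not tested against this dataset; the 0.5 radius and the 2d index on lc are assumptions:

```javascript
// Assumed: a 2d index on the "lc" field (field name taken from the question).
// Without the index, $near has to scan the whole collection.
db.test.ensureIndex({ lc: "2d" })

// Workaround 1: bound the search with $maxDistance (degrees for 2d indexes),
// so $near only has to rank documents inside the radius instead of all 1M.
db.test.find({ lc: { $near: [-1.0, -1.0], $maxDistance: 0.5 } }).limit(1)

// Workaround 2: $geoWithin (with a $center circle) returns matching documents
// without sorting them by distance, avoiding the expensive sort entirely.
db.test.find({ lc: { $geoWithin: { $center: [[-1.0, -1.0], 0.5] } } }).limit(1)
```

Whether either helps depends on how many documents fall inside the chosen radius; with 200k documents sitting exactly at [-1.0, -1.0], $geoWithin should still be cheaper because it skips the distance sort.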
The problem looks like https://jira.mongodb.org/browse/SERVER-8207, which is a duplicate of SERVER-5800 ("Refactor 2D $geoWithin into new query framework (expression index)", now closed).