I'm not sure if the functionality I expect is intended, but the results I got are certainly unexpected.
Consider the following schema:
{_id: {t: <date>, ...}, m: <string>}
And a collection containing about 1600 documents with different values.
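For context, a minimal sketch of how such a collection might be populated (the timestamps and "m" values are illustrative, not my real data):
// Illustrative setup: _id is an embedded document whose first
// subfield "t" holds a timestamp.
for (var i = 0; i < 1600; i++) {
    db.coll.insert({ _id: { t: new Date(Date.now() - i * 1000) }, m: "msg " + i });
}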
I query the collection (trying to get the latest element by "_id.t") with:
db.coll.find().sort({"_id.t": -1}).limit(1)
It's horribly slow, up to several hundred milliseconds.
When I run explain() on it, it says:
{
    "cursor" : "BasicCursor",
    "nscanned" : 589,
    "nscannedObjects" : 589,
    "n" : 1,
    "scanAndOrder" : true,
    "millis" : 113,
    "nYields" : 1,
    "nChunkSkips" : 0,
    "isMultiKey" : false,
    "indexOnly" : false,
    "indexBounds" : {
    }
}
So it appears to scan the collection sequentially and sort the matches in memory (hence "scanAndOrder" : true), rather than walking the _id index.
I have to add an index with db.coll.ensureIndex({"_id.t": -1}) to be able to query the collection fast; then explain returns more satisfactory results:
{
    "cursor" : "BtreeCursor _id.t_-1",
    "nscanned" : 1,
    "nscannedObjects" : 1,
    "n" : 1,
    "millis" : 0,
    "nYields" : 0,
    "nChunkSkips" : 0,
    "isMultiKey" : false,
    "indexOnly" : false,
    "indexBounds" : {
        "_id.t" : [
            [
                { "$maxElement" : 1 },
                { "$minElement" : 1 }
            ]
        ]
    }
}
I thought that the "_id" index behaves like the others, so I can query and then sort by its subelements left-to-right. Is this not the case?
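To make the expectation concrete, here is a sketch of both queries (the comments reflect what I expected and observed; sorting by the whole _id should be equivalent when "t" is the first subfield of every _id, since embedded documents compare field-by-field, though I haven't benchmarked it):
// What I expected to use the implicit { _id: 1 } index (in reverse):
db.coll.find().sort({ "_id.t": -1 }).limit(1)  // actual: BasicCursor + scanAndOrder
// Possible workaround: sort by the whole _id, which orders by its
// first subfield "t" anyway; this can walk the _id index:
db.coll.find().sort({ _id: -1 }).limit(1)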
P.S. I ruled out the query optimizer by using hint({_id: 1}); it didn't help: it used a BtreeCursor but still scanned the entire collection.
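For reference, the hinted attempt was along these lines (reconstructed from memory, not an exact shell transcript):
db.coll.find().sort({ "_id.t": -1 }).limit(1).hint({ _id: 1 }).explain()
// cursor: "BtreeCursor _id_", but nscanned still equaled the collection size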
So, is this something that should be expected (in which case, I think, the documentation should state that this is a special case, since http://www.mongodb.org/display/DOCS/Indexes#Indexes-UsingDocumentsasKeys states otherwise), or is it a bug that should be fixed?