I am using MongoDB 1.6.3 to store a big collection (300k+ records). I added a composite index.
db['collection_name'].getIndexes()
[
    {
        "name" : "_id_",
        "ns" : "db_name.event_logs",
        "key" : {
            "_id" : 1
        }
    },
    {
        "key" : {
            "updated_at.t" : -1,
            "community_id" : 1
        },
        "ns" : "db_name.event_logs",
        "background" : true,
        "name" : "updated_at.t_-1_community_id_1"
    }
]
However, when I try to run this code:
db['collection_name']
  .find({:community_id => 1})
  .sort(['updated_at.t', -1])
  .skip(@skip)
  .limit(@limit)
I am getting:
Mongo::OperationFailure (too much data for sort() with no index. add an index or specify a smaller limit)
What am I doing wrong?
Even with an index, I think you can still get that error if your result set exceeds 4MB.
You can see the size by going into the mongodb console and checking the collection's stats.
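A minimal sketch of doing that in the mongo shell, using the db_name.event_logs namespace shown in the question:

use db_name
db.event_logs.stats()    // the "size" field reports the collection's data size in bytes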
In my case the results showed a data size well over that limit.
Having a cursor batch size that is too large will cause this error. Setting the batch size does not limit the amount of data you can process; it just limits how much data is brought back from the database per round trip. When you iterate through the cursor and hit the batch limit, the driver makes another trip to the database for the next batch.
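For illustration, this is how you could cap the batch size on the equivalent query in the mongo shell (the filter and sort fields come from the question; 100 is an arbitrary value):

db.event_logs
    .find({community_id: 1})
    .sort({"updated_at.t": -1})
    .batchSize(100)    // return at most 100 documents per round trip to the server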
So it "feels" like you're using the index, but the index is actually a composite index. I'm not sure that the sort is "smart enough" to use only the partial index.
So two problems:
1. updated_at.t sounds like a field on which you'll do range queries, and indexes work better if the range query is the second bit.
2. How many records have community_id => 1? If the number is not big, you may be able to get away with just sorting without an index.

So you may have to switch the index around, and you may have to change the sort to use both community_id and updated_at.t. I know it seems redundant, but start there and check the Google Groups if it's still not working.
Try adding a {community_id: 1, 'updated_at.t': -1} index. It needs to search by community_id first and then sort.
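As a sketch of that suggestion in the mongo shell (ensureIndex was the shell helper of that era; the collection and field names come from the question, and the skip/limit values are arbitrary placeholders for @skip and @limit):

// Equality field first, sort field second
db.event_logs.ensureIndex({community_id: 1, "updated_at.t": -1}, {background: true})

// Filter on community_id and let the sort run through the same index
db.event_logs
    .find({community_id: 1})
    .sort({community_id: 1, "updated_at.t": -1})   // sorting on both fields, as suggested above
    .skip(0)      // placeholder for @skip
    .limit(20)    // placeholder for @limit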