Is there a way to protect my app against slow queries in MongoDB? My application supports a huge number of filter combinations, and I'm monitoring all of these queries, but at the same time I don't want to compromise performance because of a missing index definition.
I guess there is currently no support for killing a query by passing a time argument. However, on the development side you can set the profiler level to 2, which logs every query that is issued. From there you can see which queries take how much time. I know it's not exactly what you wanted, but it helps you get insight into which queries are expensive, and then in your app logic you can gracefully handle the cases where those queries might originate. I usually go with this approach and it helps.
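A minimal mongo-shell sketch of that approach (the 100 ms threshold is just an arbitrary example you would tune):

```js
// Enable full profiling for the current database (level 2 records every operation).
db.setProfilingLevel(2)

// Later, inspect the slowest recorded operations in the system.profile collection.
db.system.profile.find({ millis: { $gt: 100 } })
  .sort({ millis: -1 })
  .limit(10)
```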
There are options available on the client side (maxTimeMS, starting with the 2.6 release).
On the server side, there is no appealing global option, because it would impact all databases and all operations, even ones the system needs to keep long running for internal operation (for example, tailing the oplog for replication). In addition, it may be fine for some of your queries to be long running by design.
The correct way to solve this would be to monitor currently running queries via a script and kill the ones that are long running and user/client initiated; you can then build in exceptions for queries that are long running by design, or set different thresholds for different queries/collections/etc.
You can then use the db.currentOp() method (in the shell) to see all currently running operations. The "secs_running" field indicates how long each operation has been running. Be careful not to kill long-running operations that were not initiated by your application/client; they may be necessary system operations, like chunk migration in a sharded cluster (to name just one example).
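As a hedged sketch of that monitoring script (the 30-second threshold and the "myapp" namespace filter are assumptions you would adapt to your own deployment), a mongo-shell loop over db.currentOp() could look like this:

```js
// Kill client-initiated queries that have been running longer than 30 seconds.
// Assumption: application data lives in the "myapp" database; adjust to taste.
var ops = db.currentOp().inprog;
ops.forEach(function (op) {
  var isSlow   = op.secs_running !== undefined && op.secs_running > 30;
  var isClient = op.op === "query" && op.ns && op.ns.indexOf("myapp.") === 0;
  // Skip anything that is not clearly an application query: system
  // operations (replication, migrations, etc.) must be left alone.
  if (isSlow && isClient) {
    print("Killing op " + op.opid + " on " + op.ns +
          " (running " + op.secs_running + "s)");
    db.killOp(op.opid);
  }
});
```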
Right now, with version 2.6, this is possible; the press release announces the new maxTimeMS option. With maxTimeMS you can specify how much time you allow a query to run. For example, I may not want a specific query to run for more than 200 ms.
What is cool about this is that you can specify different timeouts for different operations.
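A minimal shell sketch (the users collection and the filter are just an illustration):

```js
// Abort this query on the server if it runs longer than 200 ms.
db.users.find({ status: "active" }).maxTimeMS(200)
```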
To answer the OP's question in the comment: there is no global setting for this. One reason is that different queries can tolerate different maximum times. For example, you may have a query that finds userInfo by its ID. This is a very common operation and should run super fast (otherwise we are doing something wrong), so we cannot tolerate it running longer than 200 ms.
But we may also have an aggregation query that we run once a day. For that operation it is fine to run for 4 seconds, but we cannot tolerate anything longer than 10 seconds, so we can set maxTimeMS to 10000.
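A sketch of that daily aggregation with its looser limit (the events collection and the pipeline are made up for illustration):

```js
// Allow the daily rollup up to 10 seconds before the server aborts it.
db.events.aggregate(
  [
    { $group: { _id: "$type", total: { $sum: 1 } } }
  ],
  { maxTimeMS: 10000 }
)
```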
The 'notablescan' option, as @ghik mentioned, will prevent you from running queries that are slow because they don't use an index. However, that option is global to the server, and it is not appropriate for use in a production environment. It also won't protect you from any source of slow queries other than table scans.
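For completeness, a sketch of toggling it at runtime in a development environment (it can also be set at mongod startup; treat the exact invocation as something to verify for your version):

```js
// Development only: make the server reject queries that require a table scan.
db.adminCommand({ setParameter: 1, notablescan: 1 })

// Turn it back off.
db.adminCommand({ setParameter: 1, notablescan: 0 })
```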
Unfortunately, I don't think there is a way to do exactly what you want right now. There is a JIRA ticket proposing the addition of a $maxTime or $maxScan query parameter, which sounds like it would help you, so please vote for it: https://jira.mongodb.org/browse/SERVER-2212.