I use the percolator (Elasticsearch 2.3.3) and have ~100 registered term queries. When I percolate one document in one thread, it takes ~500 ms:
{u'total': 0, u'took': 452, u'_shards': {u'successful': 12, u'failed': 0, u'total': 12}} TIME 0.467885982513
The client machine has 4 CPUs, so I want to percolate in 4 processes. But when I launch them, every request takes ~2000 ms:
{u'total': 0, u'took': 1837, u'_shards': {u'successful': 12, u'failed': 0, u'total': 12}} TIME 1.890885982513
Why?
I use the Python elasticsearch module, version 2.3.0. I have tried varying the shard count (from 1 to 12), but the result is the same.
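For reference, here is a minimal sketch of how I run the benchmark. The host, index name, and doc type are placeholders, and I assume the elasticsearch-py 2.x `percolate()` call signature:

```python
import time
from multiprocessing import Pool


def make_percolate_body(doc):
    # The percolate API expects the candidate document under a "doc" key.
    return {"doc": doc}


def percolate_one(doc):
    # One client per worker process; host/index/type are placeholders.
    from elasticsearch import Elasticsearch  # elasticsearch-py 2.3.0
    es = Elasticsearch(["192.168.69.142:9200"])
    start = time.time()
    result = es.percolate(index="my_index", doc_type="my_type",
                          body=make_percolate_body(doc))
    print(result, "TIME", time.time() - start)


def run_benchmark(n_procs=4, n_docs=4):
    # Percolate n_docs documents across n_procs worker processes.
    docs = [{"field": "value %d" % i} for i in range(n_docs)]
    pool = Pool(n_procs)
    pool.map(percolate_one, docs)
    pool.close()
    pool.join()
```

Calling `run_benchmark(4)` against the cluster produces the four ~2000 ms timings shown above.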
When I try to percolate in 20 threads, Elasticsearch rejects requests with this error:
RemoteTransportException[[test_node01][192.168.69.142:9300][indices:data/read/percolate[s]]]; nested: EsRejectedExecutionException[rejected execution of org.elasticsearch.transport.TransportService$4@7906da8a on EsThreadPoolExecutor[percolate, queue capacity = 1000, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@31a1c278[Running, pool size = 16, active threads = 16, queued tasks = 1000, completed tasks = 156823]]];
Caused by: EsRejectedExecutionException[rejected execution of org.elasticsearch.transport.TransportService$4@7906da8a on EsThreadPoolExecutor[percolate, queue capacity = 1000, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@31a1c278[Running, pool size = 16, active threads = 16, queued tasks = 1000, completed tasks = 156823]]]
The server has 16 CPUs and 32 GB RAM.