Increase queue size in AWS Elasticsearch

I wasn't able to modify the queue size in AWS Elasticsearch.

I am getting this error:
{
"Message": "Your request: '/_cluster/settings' payload is not allowed."
}

Elasticsearch query:
PUT _cluster/settings
{
  "persistent" : {
    "threadpool.search.queue_size" : 2000
  }
}

AWS restricts access to a number of APIs and settings. We cannot help you with that, sorry.

You would be better off using Elastic Cloud - https://www.elastic.co/cloud/as-a-service


Are there any alternatives for how we can configure this?

How come you need to change it in the first place? Are you having a very large number of concurrent search requests or just hitting a lot of shards with each request?

The reason is that I'm currently doing some load testing and I encountered this error:

{
  "error": {
    "root_cause": [
      {
        "type": "es_rejected_execution_exception",
        "reason": "rejected execution of org.elasticsearch.transport.TransportService$4@7868ca6f on EsThreadPoolExecutor[index, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@7f817344[Running, pool size = 1, active threads = 1, queued tasks = 203, completed tasks = 668181]]"
      }
    ],
    "type": "es_rejected_execution_exception",
    "reason": "rejected execution of org.elasticsearch.transport.TransportService$4@7868ca6f on EsThreadPoolExecutor[index, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@7f817344[Running, pool size = 1, active threads = 1, queued tasks = 203, completed tasks = 668181]]"
  },
  "status": 429
}

Any thoughts on how I can fix this in AWS Elasticsearch?

I do not think you can change that on AWS ES service.

Are you having a very large number of concurrent search requests or just hitting a lot of shards with each request?

Yes, I think it's hitting a lot of shards with each request. I'm trying to simulate a scenario where I'm sending 1000 requests every one-minute interval.

Is there any failover mechanism that I can use to handle this scenario? Any thoughts on where I can improve or what I can do?
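One client-side failover-style option, since the server queue size can't be raised on the managed service, is to retry rejected (HTTP 429) requests with exponential backoff and jitter. A minimal sketch, assuming a hypothetical `send_request` callable that returns `(status, body)` from your HTTP client of choice:

```python
import random
import time


def backoff_delays(max_retries=5, base=0.5, cap=30.0):
    """Yield exponentially growing, jittered delays for retry attempts."""
    for attempt in range(max_retries):
        delay = min(cap, base * (2 ** attempt))
        # Jitter spreads retries out so clients don't all hammer at once.
        yield delay * random.uniform(0.5, 1.0)


def send_with_retry(send_request, max_retries=5, base=0.5):
    """Call send_request() and retry while it returns HTTP status 429.

    send_request is a hypothetical callable returning (status, body);
    wire it to whatever client you use against the cluster.
    """
    status, body = send_request()
    for delay in backoff_delays(max_retries, base=base):
        if status != 429:
            break
        time.sleep(delay)
        status, body = send_request()
    return status, body
```

This doesn't make the cluster faster; it just smooths over transient queue-full rejections instead of dropping the work on the floor.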

How much data do you have? How large are your shards?

Make sure that you do not have too many small shards in your cluster. You can determine the ideal shard size by following the procedure described in this talk. This should let you fill up the queue less quickly.
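As a rough back-of-the-envelope version of that sizing exercise: pick a target shard size and divide your total data volume by it. The 30 GB default below is my own assumption, drawn from the commonly cited 10-50 GB per-shard guideline, not a figure from this thread; benchmark with your own data to confirm.

```python
import math


def recommended_primary_shards(total_gb, target_shard_gb=30):
    """Estimate a primary shard count so each shard lands near target_shard_gb.

    target_shard_gb=30 is an assumed default based on the common
    10-50 GB per-shard guideline; tune it from your own benchmarks.
    """
    return max(1, math.ceil(total_gb / target_shard_gb))
```

For example, 90 GB of data at a 30 GB target gives 3 primary shards; anything under the target still gets at least 1.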

With regards to the data, I'm indexing one document per request and I'm triggering 1000 requests per minute.
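Since each request carries a single document, batching them through the `_bulk` API would put far fewer tasks on the index queue (one queued task per batch instead of one per document). A sketch of building the NDJSON bulk body, assuming `docs` is a list of plain dicts:

```python
import json


def build_bulk_body(docs, index_name):
    """Build an NDJSON body for the Elasticsearch _bulk API.

    Each document becomes two lines: an action line naming the target
    index, then the document source. The body must end with a newline.
    """
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index_name}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"
```

POST the result to `/_bulk` with the `application/x-ndjson` content type. Note that older Elasticsearch versions (the 5.x era that still used the `threadpool.*` setting names) also expect a `_type` field in the action line.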

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.