How come you need to change it in the first place? Do you have a very large number of concurrent search requests, or are you just hitting a lot of shards with each request?
Yes, I think it's hitting a lot of shards with each request. I'm trying to simulate a scenario where I send 1,000 requests in every one-minute interval.
Is there any failover mechanism I can use to handle this scenario? Any thoughts on where I can improve or what else I could do?
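For reference, a minimal Python sketch of that load pattern; the endpoint, index name, and query body are placeholder assumptions, not the actual test setup:

```python
import threading
import time
import urllib.request

# Placeholder endpoint and query body; adjust for the real cluster and index.
URL = "http://localhost:9200/my-index/_search"
BODY = b'{"query": {"match_all": {}}}'


def send_request():
    req = urllib.request.Request(
        URL, data=BODY, headers={"Content-Type": "application/json"}
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            resp.read()
    except Exception as exc:
        # Search queue rejections surface here once the queue fills up.
        print(f"request failed: {exc}")


# Fire 1,000 requests at the start of each one-minute interval.
while True:
    start = time.time()
    threads = [threading.Thread(target=send_request) for _ in range(1000)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    time.sleep(max(0, 60 - (time.time() - start)))
```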
How much data do you have? How large are your shards?
Make sure that you do not have too many small shards in your cluster. You can determine the ideal shard size by following the procedure described in this talk. This should allow you to fill up the queue less quickly.
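As a quick way to check, something like this lists primary shard sizes via the `_cat/shards` API so you can spot clusters of small shards. The host and the ~10 GB flagging threshold are assumptions to adapt, not a recommendation from the talk:

```python
import urllib.request

# _cat/shards reports one line per shard; `store` is the on-disk size in bytes.
URL = "http://localhost:9200/_cat/shards?h=index,shard,prirep,store&bytes=b"

with urllib.request.urlopen(URL, timeout=10) as resp:
    lines = resp.read().decode().splitlines()

small = []
for line in lines:
    parts = line.split()
    if len(parts) < 4:
        continue  # unassigned shards have no store size
    index, shard, prirep, store = parts
    # Flag primary shards well under ~10 GB (placeholder threshold)
    # as candidates for consolidation.
    if prirep == "p" and int(store) < 10 * 1024**3:
        small.append((index, shard, int(store)))

for index, shard, store in sorted(small, key=lambda x: x[2]):
    print(f"{index} shard {shard}: {store / 1024**2:.1f} MB")
```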