Can someone please help me locate the file containing this parameter:
action.search.shard_count.limit
I have a single log file of around 3.2 GB that I am trying to parse using a single Elasticsearch node, and I get a similar error. This is just a test setup, and I am using the Elasticsearch 5.0 beta.
Error: Discover: Trying to query 2051 shards, which is over the limit of 1000. This limit exists because querying many shards at the same time can make the job of the coordinating node very CPU and/or memory intensive. It is usually a better idea to have a smaller number of larger shards. Update [action.search.shard_count.limit] to a greater value if you really want to query that many shards at the same time.
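As the error message itself suggests, one option is to raise the limit. In Elasticsearch 5.x, `action.search.shard_count.limit` is a dynamic cluster-wide setting, so it can be changed at runtime through the cluster settings API rather than by editing a file. A minimal sketch, assuming the node is reachable at `localhost:9200` and that 3000 is an acceptable limit for your setup:

```shell
curl -XPUT 'localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '
{
  "transient": {
    "action.search.shard_count.limit": 3000
  }
}'
```

Using `"persistent"` instead of `"transient"` would keep the setting across full cluster restarts. That said, the error's advice stands: 2051 shards for 3.2 GB of logs points to far too many small indices, and reducing the shard count is usually the better fix.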
Thank you all for your replies on this topic; they helped me fix my problem.
In case it helps others: I encountered the same issue on a "monitoring" Elasticsearch cluster that received data from production nodes (Metricbeat, etc.) and other sources. Simply putting an automated index purge in place with the Curator CLI kept things under control without modifying elasticsearch.yml on my cluster (i.e., without having to restart the service, so everything stayed up and running).
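For reference, a purge like the one described can be expressed as a Curator (4.x) action file and run on a schedule with `curator --config config.yml delete_old.yml`. This is only a sketch: the `metricbeat-` prefix, the `%Y.%m.%d` timestring, and the 30-day retention are assumptions you would adapt to your own index naming and retention policy.

```yaml
# delete_old.yml -- delete time-based indices older than 30 days
actions:
  1:
    action: delete_indices
    description: >-
      Delete metricbeat indices older than 30 days, based on the
      date embedded in the index name.
    options:
      ignore_empty_list: True
    filters:
      - filtertype: pattern
        kind: prefix
        value: metricbeat-        # assumed index prefix
      - filtertype: age
        source: name
        direction: older
        timestring: '%Y.%m.%d'    # assumed date pattern in index names
        unit: days
        unit_count: 30            # assumed retention period
```

Keeping the number of open indices bounded this way keeps the total shard count under the query limit without touching cluster settings.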