How to increase shard size and limit


I have set up Elasticsearch on my server with the default settings (a single node, no cluster). I have Logstash reading my application logs, and I run aggregation queries on top of the indexed logs. This worked for quite some time, but recently my queries started failing with the message below:

```
"error": {
  "root_cause": [
    {
      "type": "illegal_argument_exception",
      "reason": "Trying to query 1036 shards, which is over the limit of 1000. This limit exists because querying many shards at the same time can make the job of the coordinating node very CPU and/or memory intensive. It is usually a better idea to have a smaller number of larger shards. Update [] to a greater value if you really want to query that many shards at the same time."
    }
  ]
}
```

I am no expert, but after reading through the Elasticsearch community forums I did some analysis:

  1. My shards each hold very few documents: too many shards with too little data.
  2. I have plenty of disk space, roughly 232 GB free out of 245.9 GB, and only about 5% is consumed by the shards (checked via /_cat/allocation?v).
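The "too many shards with too little data" observation fits the default Logstash setup. As a hypothetical back-of-the-envelope check (the retention figures below are my assumptions, not from the post): Logstash creates one index per day (`logstash-YYYY.MM.DD`), and an Elasticsearch 5.x index defaults to 5 primary shards, so a wildcard search over `logstash-*` touches 5 more shards for every day of retained logs:

```python
# Assumed 5.x defaults: 5 primary shards per index, one Logstash index per day.
PRIMARY_SHARDS_PER_INDEX = 5

def shards_queried(daily_indices: int, extra_shards: int = 0) -> int:
    """Shards a logstash-* search touches, plus any extra small indices."""
    return daily_indices * PRIMARY_SHARDS_PER_INDEX + extra_shards

# The 1,000-shard query limit is crossed after about 200 days of daily indices:
assert shards_queried(200) == 1000

# 207 daily indices plus one single-shard index (e.g. .kibana) would give
# exactly the 1036 shards from the error message:
assert shards_queried(207, extra_shards=1) == 1036
```

In other words, nothing is "broken": the shard count simply grows linearly with retention until a wildcard query trips the limit.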

It looks like I need fewer, larger shards. Can anyone guide me on how to achieve this in Elasticsearch version 5.1.2?


May I suggest you look at the official Elasticsearch resources about shard sizing.
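In the meantime, here is a hedged sketch of the two usual remedies in 5.x (the setting name and template body below are my assumptions; please verify them against the documentation for your exact version). The dynamic `action.search.shard_count.limit` cluster setting is likely what the empty `[]` placeholder in the error refers to, and an index template can give future daily `logstash-*` indices a single primary shard so the count stops growing:

```
# Stopgap: raise the per-search shard limit (dynamic cluster setting)
PUT _cluster/settings
{
  "transient": {
    "action.search.shard_count.limit": 2000
  }
}

# Long-term fix: fewer, larger shards. Template future logstash-* indices
# down to 1 primary shard (5.x templates use "template", not "index_patterns"):
PUT _template/logstash-single-shard
{
  "template": "logstash-*",
  "order": 1,
  "settings": {
    "number_of_shards": 1
  }
}
```

The template only affects indices created after it is installed; to bring the existing count down you would also reindex old daily indices into larger (e.g. monthly) ones with the `_reindex` API and then delete the daily originals.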

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.