Too many shards

Hello,

For one of my Kibana queries I get the following error:

Error: Request to Elasticsearch failed: {"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"Trying to query 1230 shards, which is over the limit of 1000. This limit exists because querying many shards at the same time can make the job of the coordinating node very CPU and/or memory intensive. It is usually a better idea to have a smaller number of larger shards. Update [action.search.shard_count.limit] to a greater value if you really want to query that many shards at the same time."}],"type":"illegal_argument_exception","reason":"Trying to query 1230 shards, which is over the limit of 1000. This limit exists because querying many shards at the same time can make the job of the coordinating node very CPU and/or memory intensive. It is usually a better idea to have a smaller number of larger shards. Update [action.search.shard_count.limit] to a greater value if you really want to query that many shards at the same time."}}
at http://192.xxx.xx.xx/bundles/commons.bundle.js?v=11107:93:4184
at Function.Promise.try (http://192.xxx.xx.xx/bundles/commons.bundle.js?v=11107:93:13507)
at http://192.xxx.xx.xx/bundles/commons.bundle.js?v=11107:93:12971
at Array.map (native)
at Function.Promise.map (http://192.xxx.xx.xx/bundles/commons.bundle.js?v=11107:93:12926)
at callResponseHandlers (http://192.xxx.xx.xx/bundles/commons.bundle.js?v=11107:93:3796)
at http://192.xxx.xx.xx/bundles/commons.bundle.js?v=11107:92:24284
at processQueue (http://192.xxx.xx.xx/bundles/commons.bundle.js?v=11107:38:23627)
at http://192.xxx.xx.xx/bundles/commons.bundle.js?v=11107:38:23894
at Scope.$eval (http://192.xxx.xx.xx/bundles/commons.bundle.js?v=11107:39:4619)

Where can I set this "action.search.shard_count.limit" parameter?
Is there a way to merge my shards?

Thank you!

Alina GHERMAN

You can use the reindex API to copy the contents of multiple indexes into a single destination index, after which you can delete the old indexes.
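For example, a rough sketch of merging one month of daily indices into a single monthly index (the index names below are only placeholders, adjust them to your own naming pattern):

curl -XPOST 'localhost:9200/_reindex' -H 'Content-Type: application/json' -d '
{
  "source": { "index": ["logstash-2016.01.01", "logstash-2016.01.02"] },
  "dest":   { "index": "logstash-2016.01" }
}'

Once the reindex has finished and you have verified the document counts, the source indices can be deleted:

curl -XDELETE 'localhost:9200/logstash-2016.01.01,logstash-2016.01.02'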

How have you reached so many shards in the first place? There are probably things to do to prevent this from getting worse.
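As for where to set action.search.shard_count.limit: as far as I know it is a dynamic cluster setting in 5.x, so you can raise it through the cluster settings API without touching any file (the value 2000 below is only an example, and raising the limit does not address the underlying shard count):

curl -XPUT 'localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '
{
  "transient": {
    "action.search.shard_count.limit": 2000
  }
}'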

I have had one index per day since January, and by default there are 5 shards per index... :frowning:

Unless a daily index contains, say, 100 GB of data, you don't need five shards. A single shard will do fine up to a few tens of gigabytes.
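If you want to apply that going forward, a sketch of an index template that gives new daily indices a single shard (the template name and index pattern are placeholders, and this only affects indices created after the template is in place):

curl -XPUT 'localhost:9200/_template/single_shard_daily' -H 'Content-Type: application/json' -d '
{
  "template": "logstash-*",
  "settings": {
    "number_of_shards": 1
  }
}'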

What version of Elasticsearch has this limit? Is it new in 5.0?

Kimbro

Yes, 5.0.0-alpha1

Can someone please help me locate the file containing this parameter:
action.search.shard_count.limit

I have a single log file of around 3.2 GB which I am trying to parse using a single Elasticsearch node, and I get a similar error. This is just a test setup and I am using the Elasticsearch 5.0 beta for it.

Error: Discover: Trying to query 2051 shards, which is over the limit of 1000. This limit exists because querying many shards at the same time can make the job of the coordinating node very CPU and/or memory intensive. It is usually a better idea to have a smaller number of larger shards. Update [action.search.shard_count.limit] to a greater value if you really want to query that many shards at the same time.

@Hamsaraj, please start a new thread for your unrelated question.

Thank you all for your replies on this topic, they helped me fix my problem. :wink:

If it may help others: I encountered the same issue on a “monitoring” Elasticsearch cluster that receives data from production nodes (Metricbeat, etc.) and other sources. Simply putting automated index purges in place with the Curator CLI let me keep things under control without modifying the elasticsearch.yml file on my cluster (in other words, without having to restart the service, so everything stayed up and running).
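Something along these lines with curator_cli, as a sketch (assuming Curator 4.x or later; the metricbeat- prefix and the 30-day retention window are placeholders to adapt):

curator_cli --host localhost delete_indices --filter_list '[
  {"filtertype": "pattern", "kind": "prefix", "value": "metricbeat-"},
  {"filtertype": "age", "source": "name", "direction": "older",
   "timestring": "%Y.%m.%d", "unit": "days", "unit_count": 30}
]'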