The error below shows the hard limit of 1000 shards enforced in 7.0.1:
unexpected error while indexing monitoring document
org.elasticsearch.xpack.monitoring.exporter.ExportException: [.monitoring-es-7-2019.05.09] IndexCreationException[failed to create index [.monitoring-es-7-2019.05.09]]; nested: ValidationException[Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [5044]/[1000] maximum shards open;];
The limit you are hitting is cluster.max_shards_per_node. I think the limit is right; you should probably try to reduce the number of shards in this cluster. Here is an article with more info:
Thank you @DavidTurner for the reply,
Would you have any recommendations on how to start up the 7.0.1 node so I can manage the cluster? This is not a production environment, and I was attempting to move up to 7.0 for testing. My problem is that I can't get Kibana up and running to get in and manage things via the API, because Kibana needs to create new index templates. I understand I could go the route of sending curl commands, but I'd rather manage it from the UI.
Thank you - Cody
Yes, as a temporary measure, until you get the number of shards under control, you can add cluster.max_shards_per_node: 5100 to your elasticsearch.yml config file.
@DavidTurner
Interesting, I had tried that already and was still getting the same alarm.
I can't seem to figure out how to set cluster.max_shards_per_node: 5100.
I have tried cluster.max_shards_per_node: 5100 in elasticsearch.yml
and $ES_HOME\bin\elasticsearch -Ecluster.max_shards_per_node=5100 with no luck (;_;)
Ok, confusingly, this setting is only a dynamic setting right now, so the values from elasticsearch.yml and the command line are both ignored. I think this is a bug and have opened #42137.
In the meantime you can set it dynamically on a running Elasticsearch cluster with a curl call to the cluster settings API.
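A sketch of that dynamic settings update, assuming the node is reachable on localhost:9200 (adjust the host, port, and any authentication for your setup):

```shell
# Raise the shard limit as a persistent cluster setting.
# "persistent" survives a full cluster restart; use "transient"
# instead if you only want the override until the next restart.
curl -X PUT "localhost:9200/_cluster/settings" \
  -H 'Content-Type: application/json' \
  -d '{
    "persistent": {
      "cluster.max_shards_per_node": 5100
    }
  }'
```

Once the cluster is healthy and the shard count is back under control, you can reset the setting to its default by sending the same request with the value `null`.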