ERROR: this cluster currently has [x]/[x] maximum normal shards open

Elasticsearch may return this error to any ingest client (e.g. Logstash, the Bulk API, Beats, etc.). Here is an example as logged by Logstash:

[{TIMESTAMP}][WARN][logstash.outputs.elasticsearch][x][y] Could not index event to Elasticsearch. {
	:status=> 400, 
	:action=> [ "index", {
		:_id=> "z", 
		:_index=> "x", 
		:routing=> nil, 
		:_type=> "_doc"
	}, #<LogStash::Event:#>], 
		"index"=> { 
			"_index"=> "x", 
			"_type"=> "_doc", 
			"_id"=> "z", 
			"status"=> 400, 
			"error"=> { 
				"type"=> "illegal_argument_exception", 
				"reason"=> "Validation Failed: 1: this action would add [#] shards, but this cluster currently has [####]/[####] maximum normal shards open;" }}}}

"Maximum normal shards open " indicates that your target Elasticsearch cluster has hit its maximum shard limit as calculated by cluster.max_shards_per_node * number_of_data_nodes .

You can create temporary breathing room for the cluster by removing unused indices or (less recommended) temporarily raising this setting.
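As a sketch of those two options (the index name, setting value, and local URL below are placeholders; adjust to your cluster before running anything):

```shell
# Option 1: free up shards by deleting an unused index.
# "my-old-index" is a placeholder -- double-check what you are deleting!
curl -X DELETE "localhost:9200/my-old-index"

# Option 2 (less recommended): temporarily raise the per-node shard limit.
# 1200 is an arbitrary example value; remember to reset this to null once
# the cluster is healthy again, since it only masks the oversharding.
curl -X PUT "localhost:9200/_cluster/settings" \
  -H 'Content-Type: application/json' \
  -d '{"persistent": {"cluster.max_shards_per_node": 1200}}'
```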

Once your cluster has breathing room, note that it sounds like it's oversharded, and it may be time to scale out or apply more rigorous ILM (index lifecycle management).
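For the longer-term ILM fix, a minimal sketch of a policy that deletes indices after 30 days (the policy name, retention age, and local URL are assumptions; tune them to your retention requirements):

```shell
# Create a hypothetical ILM policy that deletes an index 30 days after creation.
# Indices must reference this policy (e.g. via an index template) for it to apply.
curl -X PUT "localhost:9200/_ilm/policy/cleanup-30d" \
  -H 'Content-Type: application/json' \
  -d '{
    "policy": {
      "phases": {
        "delete": {
          "min_age": "30d",
          "actions": { "delete": {} }
        }
      }
    }
  }'
```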

