Shard count issue between Logstash and Elasticsearch

Hi,

I am trying to write server log files to Elasticsearch using Logstash. We have 8 servers running Logstash and a single-node Elasticsearch cluster. Everything was working well until about a week ago. Currently, Logstash shows this error while trying to write data to Elasticsearch:

[2019-09-13T14:10:06,197][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"q2019091308", :_type=>"_doc", :routing=>nil}, #<LogStash::Event:0x3037fd43>], :response=>{"index"=>{"_index"=>"q2019091308", "_type"=>"_doc", "_id"=>nil, "status"=>400, "error"=>{"type"=>"validation_exception", "reason"=>"Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [999]/[1000] maximum shards open;"}}}}

I tried increasing the maximum shards per node to 2000 in the Elasticsearch configuration and restarted the service, but it still shows the same error.

cluster.max_shards_per_node: 2000
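
For reference, this is roughly how I am checking the effective limit and the current number of open shards (assuming Elasticsearch is reachable on localhost:9200 with no security enabled; the host is just an example from my setup):

# Show the effective cluster.max_shards_per_node value.
# As far as I understand, a persistent or transient setting applied via the
# cluster settings API takes precedence over the value in elasticsearch.yml.
curl -s "localhost:9200/_cluster/settings?include_defaults=true&flat_settings=true" | grep max_shards_per_node

# Count the shards (primaries plus replicas) currently open in the cluster.
curl -s "localhost:9200/_cluster/health?filter_path=active_shards,unassigned_shards&pretty"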

Can someone guide me on how to resolve this issue?

If you only have a single Elasticsearch node you should look to dramatically reduce the number of shards in the cluster rather than update this setting. You can start by ensuring that the number of replica shards is set to 0, but you should also change your sharding strategy, e.g. by going from daily to monthly indices or reducing the number of primary shards per index.
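
As a rough sketch of what that could look like (assuming Elasticsearch on localhost:9200 with no security, and that your indices follow the q* naming seen in the error; the template name, index pattern and host are only examples, adjust them to your setup):

# Drop replicas on all existing indices. On a single-node cluster replicas can
# never be assigned anyway, but they still count against the shard limit.
curl -s -X PUT "localhost:9200/_all/_settings" -H 'Content-Type: application/json' -d '{"index":{"number_of_replicas":0}}'

# Make new indices default to 1 primary shard and 0 replicas via an index template.
curl -s -X PUT "localhost:9200/_template/q-logs" -H 'Content-Type: application/json' -d '
{
  "index_patterns": ["q*"],
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0
  }
}'

After that, switching the Logstash elasticsearch output from an hourly/daily index pattern to a monthly one (for example index => "q%{+YYYY.MM}") and deleting or reindexing the old small indices should keep the total shard count well under the limit.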

