Logstash maximum indexes

I saw that today's data is missing in Kibana. After investigating, I noticed that today's index doesn't exist (I create a new index every day; today's index should be filebeat-7.9.0-2020.09.14).

This is what logstash has to say:

[2020-09-14T16:25:15,547][WARN ][logstash.outputs.elasticsearch][main][a29a4184179dd47220949906feec16011fdc502307795a338054c74d48168ab8] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"filebeat-7.9.0-2020.09.14", :routing=>nil, :_type=>"_doc"}, #<LogStash::Event:0x7f043384>], :response=>{"index"=>{"_index"=>"filebeat-7.9.0-2020.09.14", "_type"=>"_doc", "_id"=>nil, "status"=>400, "error"=>{"type"=>"validation_exception", "reason"=>"Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [1000]/[1000] maximum shards open;"}}}}

Does this mean that I have too many indexes? If so, can I increase the limit? I don't think the cluster should have that many shards open.

Thanks ahead!

This blog post talks about the number of shards in a cluster.

The limit of 1000 can be changed, but read that blog to understand why the limit is there by default.
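For reference, the setting behind that error is `cluster.max_shards_per_node` (default 1000 per non-frozen data node). A minimal sketch of checking the current shard count and raising the limit, assuming a cluster reachable at localhost:9200 with no authentication:

```shell
# Check how many shards are currently open in the cluster
curl -s "localhost:9200/_cluster/health?filter_path=active_shards"

# Raise the per-node shard limit (persistent survives restarts)
curl -X PUT "localhost:9200/_cluster/settings" \
  -H 'Content-Type: application/json' \
  -d '{
    "persistent": {
      "cluster.max_shards_per_node": 2000
    }
  }'
```

Note that raising the limit only buys headroom. The longer-term fix, as the blog explains, is usually to reduce the shard count, e.g. by deleting or shrinking old daily filebeat-* indices (ILM can automate this) or by using fewer primaries/replicas per index.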