I had a really large logstash.log file which filled up the disk.
I deleted the file and restarted the service, and now I get this:
Aug 12 16:54:13 SCL-SIEM-01 logstash[3096]: [2019-08-12T16:54:13,127][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"filebeat-7.3.0-2019.08.12", :_type=>"_doc", :routing=>nil}, #LogStash::Event:0x2d9a6298], :response=>{"index"=>{"_index"=>"filebeat-7.3.0-2019.08.12", "_type"=>"_doc", "_id"=>
Aug 13 10:07:30 SCL-SIEM-01 logstash[5784]: [2019-08-13T10:07:30,784][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"filebeat-7.3.0-2019.08.13", :_type=>"_doc", :routing=>nil}, #LogStash::Event:0x54349926], :response=>{"index"=>{"_index"=>"filebeat-7.3.0-2019.08.13", "_type"=>"_doc", "_id"=>nil, "status"=>400, "error"=>{"type"=>"validation_exception", "reason"=>"Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [1000]/[1000] maximum shards open;"}}}}
There is a limit in Elasticsearch of 1000 open shards per data node (by default). It could be increased, but I am pretty certain that doing so is the wrong approach. This is really an Elasticsearch question and you should move it to that forum. The answer will be to reduce the number of shards, and folks there should be able to provide guidance.
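If you want to see where the shards are going before cleaning up, something like the following may help. This is a minimal sketch, assuming Elasticsearch is reachable on localhost:9200 without authentication; the index name in the DELETE example is just an illustration, substitute your own old indices. Raising the per-node limit is shown only as a temporary stopgap so ingestion resumes; the real fix is still to delete or shrink old daily indices so the shard count comes back down.

    # List indices sorted by creation date to spot old daily filebeat-* indices
    curl -s "localhost:9200/_cat/indices?v&s=creation.date"

    # Count how many shards are currently allocated in the cluster
    curl -s "localhost:9200/_cat/shards" | wc -l

    # Temporary stopgap only: raise the per-node shard limit (default 1000)
    curl -s -X PUT "localhost:9200/_cluster/settings" \
      -H 'Content-Type: application/json' \
      -d '{"persistent": {"cluster.max_shards_per_node": 2000}}'

    # Longer term: delete daily indices you no longer need (example name only)
    curl -s -X DELETE "localhost:9200/filebeat-7.3.0-2019.08.01"

On a single-node cluster, each daily filebeat index with one primary and one replica also leaves the replica unassigned, so reducing replicas to 0 or switching to weekly/monthly indices (or ILM rollover) keeps the shard count much lower.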