[WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch

I had a really large logstash.log file which filled up the HDD.

I deleted the file and restarted the service, and now I get this:

Aug 12 16:54:13 SCL-SIEM-01 logstash[3096]: [2019-08-12T16:54:13,127][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"filebeat-7.3.0-2019.08.12", :_type=>"_doc", :routing=>nil}, #LogStash::Event:0x2d9a6298], :response=>{"index"=>{"_index"=>"filebeat-7.3.0-2019.08.12", "_type"=>"_doc", "_id"=>

Is that really the complete error message?

If you filled the disk and Elasticsearch is writing its indices to the same disk, then this should help.
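For context: when a data node's disk passes the flood-stage watermark, Elasticsearch marks indices read-only, and that block does not clear itself in 7.x even after space is freed. A hedged sketch of how to check allocation and clear the block, in the same Dev Tools syntax used below (run against your own cluster; `_all` applies the change to every index):

GET /_cat/allocation?v

PUT /_all/_settings
{
  "index.blocks.read_only_allow_delete": null
}

This only applies if the disk-full condition is what triggered the indexing failures; the later posts in this thread show the actual cause was different.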

sorry the full entry is

Aug 13 10:07:30 SCL-SIEM-01 logstash[5784]: [2019-08-13T10:07:30,784][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"filebeat-7.3.0-2019.08.13", :_type=>"_doc", :routing=>nil}, #LogStash::Event:0x54349926], :response=>{"index"=>{"_index"=>"filebeat-7.3.0-2019.08.13", "_type"=>"_doc", "_id"=>nil, "status"=>400, "error"=>{"type"=>"validation_exception", "reason"=>"Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [1000]/[1000] maximum shards open;"}}}}

There is a default limit in Elasticsearch of 1000 open shards per data node (cluster.max_shards_per_node). I expect it could be increased, but I am pretty certain that doing so is the wrong approach. This is really an Elasticsearch question and you should move it to that forum. The answer will be to reduce the number of shards, and folks there should be able to provide guidance.
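As a hedged illustration of where to start, the _cat APIs show which indices the shards belong to (run against your own cluster, same Dev Tools syntax as the request below):

GET /_cat/indices?v&s=index

GET /_cat/shards?v

With daily filebeat-* indices, each day typically adds a primary plus a replica shard, which matches the "[2] total shards" in the error above, so old daily indices accumulating is a likely cause.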

Agreed, many thanks

Hello all

I managed to get around the problem by increasing the shard limit on the cluster:

PUT /_cluster/settings
{
  "persistent": {
    "cluster.max_shards_per_node": "1500"
  }
}

But I don't know if this is the correct course of action... Is there a way of merging the shards, or of reducing the number of shards open?

Many Thanks
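For anyone landing here later: raising the limit buys time but doesn't address the growth. A hedged sketch of the usual ways to bring the shard count back down (index names below are illustrative, not taken from this cluster; same Dev Tools syntax as above):

# Delete old time-based indices you no longer need
DELETE /filebeat-7.3.0-2019.06.*

# On a single-node cluster, replicas can never be assigned but, as I
# understand it, still count against the limit; dropping them roughly
# halves the shard count
PUT /filebeat-*/_settings
{
  "index.number_of_replicas": 0
}

# "Merging" shards is done with the Shrink API: mark the index
# read-only, then shrink it into a new index with fewer primaries
PUT /filebeat-7.3.0-2019.06.01/_settings
{
  "index.blocks.write": true
}

POST /filebeat-7.3.0-2019.06.01/_shrink/filebeat-7.3.0-2019.06.01-shrunk
{
  "settings": { "index.number_of_shards": 1 }
}

Longer term, Index Lifecycle Management (ILM) can delete or shrink old filebeat indices automatically so the count stays bounded.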