Hello All
Starting Logstash gave this error:
Aug 13 10:07:30 SCL-SIEM-01 logstash[5784]: [2019-08-13T10:07:30,784][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"filebeat-7.3.0-2019.08.13", :_type=>"_doc", :routing=>nil}, #<LogStash::Event:0x54349926>], :response=>{"index"=>{"_index"=>"filebeat-7.3.0-2019.08.13", "_type"=>"_doc", "_id"=>nil, "status"=>400, "error"=>{"type"=>"validation_exception", "reason"=>"Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [1000]/[1000] maximum shards open;"}}}}
so I ran this command from Kibana to get around this problem:
PUT /_cluster/settings
{
"persistent": {
"cluster.max_shards_per_node": "1500"
}
}
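For context, here is how I read the shard math in the error message. This is just a rough sketch of my understanding, and I am assuming a single data node with the default cluster.max_shards_per_node of 1000:

```python
# Sketch of the shard-limit check implied by the error message.
# Assumptions (not confirmed from the cluster): one data node,
# default cluster.max_shards_per_node of 1000.
max_shards_per_node = 1000
data_nodes = 1
open_shards = 1000   # "[1000]/[1000] maximum shards open" in the error
new_shards = 2       # new daily Filebeat index: 1 primary + 1 replica

limit = max_shards_per_node * data_nodes
would_exceed = open_shards + new_shards > limit
print(would_exceed)  # True -> Elasticsearch rejects the index request
```

So raising the limit to 1500 makes room for the new daily index, but the shard count keeps growing each day.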
Is increasing the cluster's max shards the best way of managing indices?