Is increasing cluster max shards the best way of managing indices?

Hello All

Starting Logstash gave this error:

Aug 13 10:07:30 SCL-SIEM-01 logstash[5784]: [2019-08-13T10:07:30,784][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"filebeat-7.3.0-2019.08.13", :_type=>"_doc", :routing=>nil}, #<LogStash::Event:0x54349926>], :response=>{"index"=>{"_index"=>"filebeat-7.3.0-2019.08.13", "_type"=>"_doc", "_id"=>nil, "status"=>400, "error"=>{"type"=>"validation_exception", "reason"=>"Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [1000]/[1000] maximum shards open;"}}}}
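For context, a quick way to see how close the cluster is to this limit, run from the Kibana Dev Tools console (these are standard APIs; active_shards is only a rough guide, since the limit counts all shards of open indices):

# cluster-wide shard counts
GET /_cluster/health?filter_path=active_shards,active_primary_shards

# one row per shard, with index name and size (can be a long listing)
GET /_cat/shards?v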

So I ran this command from Kibana to get around the problem:

PUT /_cluster/settings
{
  "persistent": {
    "cluster.max_shards_per_node": "1500"
  }
}
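To confirm the change was applied, the setting can be read back (flat_settings just flattens the response into key/value pairs):

# show explicitly set cluster settings as flat keys
GET /_cluster/settings?flat_settings=true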

Is increasing cluster max shards the best way of managing indices?

Why do you have so many shards per node? Having lots of small indices and shards can be very inefficient, so the limit is there to guard against that and prevent you from running into problems. If you have lots of small shards, e.g. due to a long retention period, I would recommend switching from daily to weekly or monthly indices, potentially with a single primary shard. Please read this blog post for further guidance.
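As a sketch of the single-primary-shard part, a legacy index template along these lines would do it (the template name is illustrative, and the order may need to be higher than the Filebeat-supplied template's order for the setting to win):

# give every new filebeat-* index one primary shard
PUT /_template/filebeat-single-shard
{
  "index_patterns": ["filebeat-*"],
  "order": 1,
  "settings": {
    "index.number_of_shards": 1
  }
}

Switching from daily to weekly or monthly indices is then a change on the writer's side, e.g. the index option of the Logstash elasticsearch output (something like index => "filebeat-%{+YYYY.MM}" instead of a daily pattern).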

Thanks Christian,

Yes, we do have a long retention period, as the cluster is collecting system logs. I will have a read of the blog post and report back.
