Hi guys,
Kibana 7.7.0
I would be grateful for advice on shaping a proper configuration for Elasticsearch. Logs are shipped via fluentd.conf. All of a sudden it stopped fetching logs. The instance is clustered and hosted on AWS.
Below is the fluentd buffer config. I am wondering whether I made a mistake, because I have not yet implemented ILM.
I am considering adding the following, as specified here:
enable_ilm true
ilm_policy_id "fluentd-ilm"
ilm_policy_overwrite false
ilm_policy {"policy":{"phases":{"hot":{"min_age":"0ms","actions":{"rollover":{"max_age":"7d","max_size":"50gb"}}},"delete":{"min_age":"1d","actions":{"delete":{}}}}}}
Would that make sense to you?
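For reference, the same policy I am considering for fluentd could, as far as I understand, also be created up front from Kibana Dev Tools (the policy name `fluentd-ilm` here matches the `ilm_policy_id` above; this is just a sketch of what I think the equivalent request would look like):

```
PUT _ilm/policy/fluentd-ilm
{
  "policy": {
    "phases": {
      "hot": {
        "min_age": "0ms",
        "actions": {
          "rollover": { "max_age": "7d", "max_size": "50gb" }
        }
      },
      "delete": {
        "min_age": "1d",
        "actions": { "delete": {} }
      }
    }
  }
}
```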
<buffer tag, time>
  timekey 10
  timekey_wait 10
  retry_type exponential_backoff
  flush_mode interval
  flush_interval 5s
  flush_thread_count 3
  retry_forever true
  retry_max_interval 30
  chunk_limit_size 2M
  total_limit_size 500M
  overflow_action block
</buffer>
2022-03-15 xx:xx:xx+0000 [warn]: #0 dump an error event: error_class=Fluent::Plugin::ElasticsearchErrorHandler::ElasticsearchError error="400 - Rejected by Elasticsearch [error type]: illegal_argument_exception [reason]: 'Validation Failed: 1: this action would add [10] total shards, but this cluster currently has [2999]/[3000] maximum shards open;'" location=nil tag="kube.services-x.job-mapper" .. "stream"=>"stderr", "docker"=>{"container_id"=>"1111111"}, "kubernetes"=>{"container_name"=>"job-mapper", "namespace_name"=>"services-1", "pod_name"=>"xxx-xxx", "container_image"=>"xx/xx/job-xx:2022.03.01-000000", "container_image_id"=>"docker-pullable://xx/xx/job-mapper@sha256:xxx", "pod_id"=>"123456", "host"=>"ip-00.xyz", "labels"=>{"x"=>"x", "x"=>"job-mapper"}, "namespace_labels"=>{"cluster"=>"xtyz123", "ns"=>"services-1"}}, "container_info"=>"1111111-stderr"}
The validation error points out that the cluster has hit its maximum number of open shards. I have been reading a lot and currently have no clue which approach would be appropriate: setting up ILM, or something different, such as a modification via Dev Tools?
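By "modification via Dev Tools" I mean something along these lines (a sketch, not a tested fix). First inspecting where the shards are going, then, as a temporary stopgap only, raising the per-node shard limit (the default is 1000 per data node, which would match the `[3000]` cap in the error if the cluster has 3 data nodes; that node count is my assumption):

```
# See which indices hold the shards
GET _cat/shards?v

# Stopgap only: raise the limit (does not address shard sprawl itself)
PUT _cluster/settings
{
  "persistent": {
    "cluster.max_shards_per_node": 1500
  }
}
```

My understanding is that this only buys time, and that ILM (or deleting/shrinking old indices) is the proper long-term fix, which is why I am asking whether the ILM config above is the right direction.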