Below is the error message from Logstash. The Elasticsearch cluster is designed with hot, warm, and cold node tiers, and indices are created on a daily basis.
[2024-10-29T11:14:22,632][INFO ][logstash.outputs.elasticsearch][main][1dde6b9e90686623464913a83aa66d3d68284a23f9523852fe779664b16dd7a1] Retrying failed action {:status=>429, :action=>["index", {:_id=>"717162ethbnbxmansksdjioqdjalmx,amaksdjaknd11", :_index=>"western2-2024.10.18", :routing=>nil}, {"container"=>{"name"=>"service", "id"=>"ape-service-1.2.500-54c4788d55-lnmwb"}, "path"=>"/nfs/logs/ape2/nfs.service+ape-service-1.2.500-54c4788d55-lnmwb", "type"=>"ape2-stats", "labels"=>{"direction"=>"read", "eventglobalid"=>"unknown", "apeeventid"=>19134, "failed"=>"no", "apeseverity"=>"MAJOR", "EventClassID"=>"EventStats", "autoid"=>29928, "autoguid"=>"c174f5c8-6a75-44f4-8e57-0de13a180581", "Severity"=>"Medium", "tenant"=>"12345jajnammm", "source"=>"["dxl://kafka-prod-usw-1a-0.kafka-dev-heeadless.svc.cluster.local:9009,kafka-prod-usw-1a-1.kafka-dev-heeadless.svc.cluster.local:9009,kafka-prod-usw-1a-2.kafka-dev-heeadless.svc.cluster.local:9009/ape.incident.raw/group0/0"]", "Description"=>"Statistical Information Per Event"}, "@version"=>"1", "log"=>{"level"=>"Information"}, "host"=>"logstash-service-49fk8", "fingerprint"=>"717162ethbnbxmansksdjioqdjalmx,amaksdjaknd11", "@timestamp"=>2024-10-18T00:00:50.061Z}], :error=>{"type"=>"cluster_block_exception", "reason"=>"index [western2-2024.10.18] blocked by: [TOO_MANY_REQUESTS/12/disk usage exceeded flood-stage watermark, index has read-only-allow-delete block];"}}
The index name in question is western2-2024.10.18.

Why is Logstash trying to write to the older index western2-2024.10.18? How does it know which index it should write events to?

Please help.
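For context, the elasticsearch output in the pipeline presumably uses a date-based index pattern along these lines (a sketch; the hosts value and exact pattern are assumptions inferred from the index name in the error):

```
output {
  elasticsearch {
    hosts => ["https://elasticsearch:9200"]
    # The date in "%{+YYYY.MM.dd}" is resolved from each event's
    # @timestamp field, not from the current wall-clock time. The event
    # in the error above carries @timestamp 2024-10-18T00:00:50.061Z,
    # so it resolves to western2-2024.10.18 even on 2024-10-29.
    index => "western2-%{+YYYY.MM.dd}"
  }
}
```

If events are replayed or delayed (for example, re-consumed from Kafka), their original timestamps will route them to the matching older daily index, which can then hit the read-only-allow-delete block when disk usage on that tier exceeds the flood-stage watermark.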