I am running Elastic Stack v7.2 on Azure Kubernetes Service for logging and monitoring of containerized applications, with a 3 master + 2 data node configuration. Elasticsearch Curator is scheduled to run once daily and delete indices older than 4 days. Curator deletes the older indices successfully, but they keep coming back. How is this happening? Please see the log entry below from the ES server:
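My Curator action file looks roughly like the following (the pattern filter is illustrative of how I match the filebeat indices by prefix):

actions:
  1:
    action: delete_indices
    description: "Delete indices older than 4 days, based on the date in the index name"
    options:
      ignore_empty_list: True
      disable_action: False
    filters:
    - filtertype: pattern      # illustrative: match my filebeat indices by prefix
      kind: prefix
      value: filebeat-
    - filtertype: age          # derive the age from the date suffix in the index name
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 4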
{"type": "server", "timestamp": "2020-04-15T21:41:16,415+0000", "level": "INFO", "component": "o.e.i.IndexingMemoryController", "cluster.name": "es-eic-logs", "node.name": "elasticsearch-data-0", "cluster.uuid": "dTg5I7svTnCy1_eOkXw7gw", "node.id": "EoTA9f6bSNGDTZQ5T3oQfw", "message": "now throttling indexing for shard [[filebeat-k8-7.2.0-2019.10.17][0]]: segment writing can't keep up" }
You can see that the message refers to a filebeat index that was created on 17-Oct-2019 and should therefore have been deleted by 21-Oct-2019. Even when I delete these indices manually, they come back after a short interval. Can someone help me understand how these indices keep reappearing? I am not explicitly POSTing data to any of the back-dated indices, nor should the applications be doing so.
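For the manual deletes I run something along these lines (the host name is a placeholder for my in-cluster Elasticsearch service):

curl -X DELETE "http://elasticsearch:9200/filebeat-k8-7.2.0-2019.10.17"

The delete is acknowledged, yet the index reappears shortly afterwards.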
Below is the response from the cluster health API for my cluster.
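I query it with roughly the following (again, the host name is a placeholder):

curl -s "http://elasticsearch:9200/_cluster/health?pretty"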
{
"cluster_name" : "es-eic-logs",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 5,
"number_of_data_nodes" : 2,
"active_primary_shards" : 114,
"active_shards" : 228,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 100.0
}