Elasticsearch: Odd Behaviour on Index Deletion

I'm running Elasticsearch 6.8.6 and am seeing some odd behaviour when deleting an old Filebeat index. Here is the log from the master node, which shows what's happening:

[2020-06-19T07:06:30,899][INFO ][o.e.c.m.MetaDataDeleteIndexService] [es-master-001] [filebeat-6.8.6-2020.04.20/IHIZKPT6SRCJDQeJVC1jzg] deleting index
[2020-06-19T07:06:38,645][INFO ][o.e.c.m.MetaDataCreateIndexService] [es-master-001] [filebeat-6.8.6-2020.04.20] creating index, cause [auto(bulk api)], templates [filebeat-6.8.6], shards [3]/[1], mappings [doc]
[2020-06-19T07:06:39,081][INFO ][o.e.c.r.a.AllocationService] [es-master-001] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[filebeat-6.8.6-2020.04.20][2], [filebeat-6.8.6-2020.04.20][0]] ...]).
[2020-06-19T07:06:39,128][INFO ][o.e.c.m.MetaDataMappingService] [es-master-001] [filebeat-6.8.6-2020.04.20/bXCnP4_VRxqmZf4Lag0n1Q] update_mapping [doc]

As you can see, the index is deleted and then immediately recreated, even though it is very old, and it is repopulated with the same data. Is this some kind of cluster or shard state issue? How do I gain insight into what's happening and fix it?


I've tried restarting all of the data nodes but that hasn't fixed it.
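For reference, this is roughly how I'm deleting the index and then watching it reappear (index name taken from the log above; the host is an assumption, adjust as needed):

```shell
# Delete the old index (assumes the cluster is reachable on localhost:9200)
curl -X DELETE "localhost:9200/filebeat-6.8.6-2020.04.20"

# Shortly afterwards the index is back, with a new UUID:
curl "localhost:9200/_cat/indices/filebeat-6.8.6-2020.04.20?v"
```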

I'd appreciate some feedback on this. The index still does not stay deleted, even after a full cluster restart.

How is your cluster configured? Have you got minimum_master_nodes set correctly?

minimum_master_nodes is set to 2. We run 3...
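For completeness, this is how it's configured in elasticsearch.yml on each master-eligible node (the 6.x Zen discovery setting):

```yaml
# elasticsearch.yml on each master-eligible node
# 3 master-eligible nodes -> quorum of (3 / 2) + 1 = 2
discovery.zen.minimum_master_nodes: 2
```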

Any chance of a response from an elastic team member on this please?

There is a client indexing into the filebeat-6.8.6-2020.04.20 index, which creates the index if it does not exist (hence the cause [auto(bulk api)] in your log).

I don't think there's anything wrong with Elasticsearch here, you'll need to track down that client.
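One way to track it down: tell Elasticsearch not to auto-create that index, so the offending client fails loudly instead of silently recreating it. A sketch using the dynamic action.auto_create_index cluster setting (the pattern here is an assumption; adjust it so legitimate new Filebeat indices are still allowed):

```shell
curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
  "transient": {
    "action.auto_create_index": "-filebeat-6.8.6-2020.04.20,*"
  }
}'
```

Any bulk request that targets the deleted index will then get an index_not_found_exception, which should surface in the offending client's logs.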

It turns out data was being buffered by the Logstash instances. I had to manually clear down their persistent queues to fix it.
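For anyone hitting the same thing: clearing a Logstash persistent queue amounts to stopping Logstash and removing the queue directory (path.queue, which defaults to a queue directory under the Logstash data path). A sketch for a package install; the exact paths and service name are assumptions for your setup:

```shell
# Stop Logstash so the queue files are not in use
sudo systemctl stop logstash

# Remove the persisted, not-yet-delivered events -- note this discards them
sudo rm -rf /var/lib/logstash/queue/*

# Start Logstash again with an empty queue
sudo systemctl start logstash
```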