I'm running Elasticsearch 6.8.6 and am seeing odd behaviour when deleting an old Filebeat index. Here is the log from the master node showing what happens:
```
[2020-06-19T07:06:30,899][INFO ][o.e.c.m.MetaDataDeleteIndexService] [es-master-001] [filebeat-6.8.6-2020.04.20/IHIZKPT6SRCJDQeJVC1jzg] deleting index
[2020-06-19T07:06:38,645][INFO ][o.e.c.m.MetaDataCreateIndexService] [es-master-001] [filebeat-6.8.6-2020.04.20] creating index, cause [auto(bulk api)], templates [filebeat-6.8.6], shards /, mappings [doc]
[2020-06-19T07:06:39,081][INFO ][o.e.c.r.a.AllocationService] [es-master-001] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[filebeat-6.8.6-2020.04.20], [filebeat-6.8.6-2020.04.20]] ...]).
[2020-06-19T07:06:39,128][INFO ][o.e.c.m.MetaDataMappingService] [es-master-001] [filebeat-6.8.6-2020.04.20/bXCnP4_VRxqmZf4Lag0n1Q] update_mapping [doc]
```
As you can see, the index is deleted and then recreated about eight seconds later with `cause [auto(bulk api)]` and a new UUID, even though the index is very old, and it ends up repopulated with the same data. Is this some kind of cluster or shard state issue? How do I gain insight into what is recreating it, and how do I fix it?
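For reference, this is roughly how I'm deleting the index and then checking whether it has come back (the `localhost:9200` address is a placeholder for my cluster endpoint):

```shell
#!/bin/sh
# Delete the old Filebeat index by name.
curl -X DELETE "localhost:9200/filebeat-6.8.6-2020.04.20"

# A few seconds later, list matching indices with their creation dates
# to see whether the index has been auto-recreated with a fresh UUID.
curl "localhost:9200/_cat/indices/filebeat-6.8.6-2020.04.20?v&h=index,uuid,creation.date.string"
```

After the delete returns `{"acknowledged":true}`, the `_cat/indices` call still shows the index, but with a current creation date and a different UUID, matching what the master log above reports.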