We are currently running an Elasticsearch & Kibana (8.14.3) cluster with:

- 3 master nodes and 6 data nodes
- Metricbeat 8.14.3 for monitoring the cluster (we have enabled the xpack.monitoring feature; the relevant settings can be checked as sketched below)
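For context, the local exporter's cleanup of old .monitoring-* indices is driven by the monitoring retention setting (xpack.monitoring.history.duration, 7d by default). This is a minimal sketch of how we check those settings, assuming curl against localhost:9200 with no authentication for brevity:

    # show the effective monitoring settings, including defaults
    # (look for xpack.monitoring.collection.enabled and xpack.monitoring.history.duration)
    curl -s 'localhost:9200/_cluster/settings?include_defaults=true&flat_settings=true&pretty' | grep monitoring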
This morning I received a "failed to delete indices" warning from the current master node.
I checked the log file from 29-08 and found:
[2025-08-29T10:00:00,00][INFO] [o.e.x.m.e.l.LocalExporter] [elastic-current-master] cleaning up [2] old indices
[2025-08-29T10:00:00,00][INFO] [o.e.x.m.e.l.LocalExporter] [elastic-current-master] cleaning up [2] old indices
[2025-08-29T10:00:00,00][INFO] [o.e.x.m.MetadataDeleteIndexService] [elastic-current-master] [.monitoring-es-7-2025-08-22/Cj&U39slfcmlla] deleting index
[2025-08-29T10:00:00,00][INFO] [o.e.x.m.MetadataDeleteIndexService] [elastic-current-master] [.monitoring-kibana-7-2025-08-22/bWgDTYdfdskmgap] deleting index
[2025-08-29T10:00:00,055][INFO] [o.e.x.m.l.LocalExporter] [elastic-current-master] [.monitoring-kibana-7-2025-08-22/bWgDTYdfdskmgap] deleting index
[2025-08-29T10:00:00,055][INFO] [o.e.x.m.l.LocalExporter] [elastic-current-master] [.monitoring-es-7-2025-08-22/bWgDTYdfdskmgap] deleting index
org.elasticsearch.index.IndexNotFoundException: no such index [.monitoring-kibana-7-2025-08-22]
[2025-08-29T10:30:00,055][INFO] [o.e.x.s.SnapshotRetentionTask] [elastic-current-master] starting SLM retention snapshot cleanup task
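To double-check whether the 2025-08-22 daily indices still exist after that cleanup run, I list the monitoring indices like this (a minimal sketch, same localhost/no-auth assumptions as above):

    # list all monitoring indices, sorted by name, to see if the 2025-08-22 ones are really gone
    curl -s 'localhost:9200/_cat/indices/.monitoring-*?v&s=index'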
I also checked the log files from 26-08 to 28-08: they contain the same cleanup and "deleting index" messages, but without the "no such index" exception.
I checked the ILM policies => there is no policy for these .monitoring-*-7 indices (only for the .monitoring-*-8 ones), since we are running version 8.
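For reference, this is roughly how I verified that no ILM policy is attached to those indices (sketch only; host and lack of auth are assumptions):

    # list the configured ILM policies
    curl -s 'localhost:9200/_ilm/policy?pretty'
    # show whether ILM is managing any of the monitoring indices
    curl -s 'localhost:9200/.monitoring-*/_ilm/explain?pretty'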
I checked the cron jobs (with sudo as well) => no cron job deletes these indices.
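Roughly what I ran to rule out a scheduled job (assuming a typical Linux setup):

    # per-user and root crontabs
    crontab -l
    sudo crontab -l
    # system-wide cron locations, looking for anything that deletes indices or runs curator
    sudo grep -ri 'monitoring\|curator\|DELETE' /etc/cron.d /etc/crontab /etc/cron.daily 2>/dev/null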
I checked the disk space used by the indices and on the nodes => still fine.
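The disk check was done roughly like this (sketch, same assumptions as above):

    # per-node disk usage and free space
    curl -s 'localhost:9200/_cat/allocation?v'
    # size of the monitoring indices themselves
    curl -s 'localhost:9200/_cat/indices/.monitoring-*?v&h=index,store.size'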
I assume someone must have deleted the index manually, but I'm not sure.
If anyone has better suggestions or experience handling this kind of issue, please let me know
Thanks in advance