Problem
Filebeat is repeatedly re-ingesting old logs (4 to 15 days old), causing previously deleted indices to be recreated.
Environment
Filebeat version: 7.10.1
Deployment: ECK (Elastic Cloud on Kubernetes) DaemonSet
Platform: Google Kubernetes Engine (GKE)
Number of pods: 42 DaemonSet pods
Elasticsearch version: 7.10.1
What's Happening
This happens periodically, and we need a way to configure Filebeat to stop it:
I delete old filebeat indices (e.g., filebeat-7.10.1-2025.12.10 through filebeat-7.10.1-2025.12.16)
Within hours, these indices are recreated with old log data
The recreated indices contain logs with:
@timestamp : Dec 12-16, 2025 (original log timestamp)
event.ingested : Jan 1, 2026 (today - when re-indexed)
How can we solve this?
Hello @Marwan_Ghonem
Welcome to the Community!!
Could you please share your filebeat.yml so we can understand why the older indices are being created again?
It could be related to the registry path:
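As a rough sketch (assuming the usual ECK Beat DaemonSet layout; the hostPath below is a placeholder, not taken from your setup): if the registry lives on an emptyDir, every pod restart loses it, and Filebeat re-reads any files it can still see, which matches old logs reappearing hours after you delete the indices. Persisting the data directory on the node looks roughly like this (trimmed to the registry-relevant parts; container log collection may also need /var/log/pods and similar mounts):

```yaml
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: filebeat
spec:
  type: filebeat
  version: 7.10.1
  elasticsearchRef:
    name: elasticsearch
  config:
    filebeat.inputs:
      - type: container
        paths:
          - /var/log/containers/*.log
  daemonSet:
    podTemplate:
      spec:
        containers:
          - name: filebeat
            volumeMounts:
              - name: varlogcontainers
                mountPath: /var/log/containers
              - name: data
                # Filebeat keeps its registry under the data path; if this
                # is an emptyDir, the registry is lost on every pod restart.
                mountPath: /usr/share/filebeat/data
        volumes:
          - name: varlogcontainers
            hostPath:
              path: /var/log/containers
          - name: data
            hostPath:
              # Placeholder location; persists the registry across restarts.
              path: /var/lib/filebeat-data
              type: DirectoryOrCreate
```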
A similar older issue:
I have Filebeat reading the log files on a remote server and shipping them to Logstash on the same server. I have tried deleting indices older than 6 months, but I noticed that the indices are recreated in Elasticsearch. Can you tell me where I'm going wrong? I went with the default configuration. Thanks
```yaml
paths:
  #- /var/log/*.log
  - \\webserver10\iislogs\*.log
output.logstash:
  # Boolean flag to enable or disable the output module.
  #enabled: true
  # The Logstash hosts
  ho…
```
Thanks!!
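One more thing worth checking (not mentioned in the quoted thread, so treat this as a hedged aside): with the log input, the clean_* registry options can also cause re-ingestion if they drop state for files that still exist on disk, because Filebeat then treats those files as new and re-ships them from the start. Roughly:

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/*.log
    # Ignore files whose modification time is older than this; keeps old
    # logs from being picked up again after registry state is lost.
    ignore_older: 72h
    # Drop registry entries for files not harvested for this long. This
    # must be greater than ignore_older + scan_frequency, otherwise a
    # still-present file can be forgotten and then re-read from scratch.
    clean_inactive: 96h
```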
stephenb (Stephen Brown), January 2, 2026, 3:27pm
Hi @Marwan_Ghonem
In addition, 7.10 is 5+ years old; you should upgrade as a matter of urgency.
Many improvements have been made to Filebeat since then, including several aimed directly at Kubernetes container logs.
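As one concrete example (hedged: this is an 8.x-era config, not something you can use on 7.10), newer Filebeat versions offer the filestream input with fingerprint-based file identity, which avoids re-ingestion caused by inode reuse on busy Kubernetes nodes:

```yaml
filebeat.inputs:
  - type: filestream
    # filestream requires a unique id per input.
    id: kubernetes-container-logs
    paths:
      - /var/log/containers/*.log
    # Parse the container log format (Docker JSON / CRI).
    parsers:
      - container: ~
    # Identify files by a content fingerprint instead of inode + device,
    # so inode reuse does not make Filebeat re-read old data as new files.
    file_identity.fingerprint: ~
    prospector.scanner.fingerprint.enabled: true
```

As far as I know, the fingerprint file identity is not available on 7.10, which is part of why the upgrade matters for this particular symptom.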