I have Filebeat reading log files on a remote server and shipping them to Logstash on the same server. I tried deleting indices older than six months, but I've noticed that the indices get recreated in Elasticsearch.
If Logstash is recreating the indices, it's because something is still sending events to it.
It sounds like Filebeat is re-reading the files, which it normally won't do. When does this happen? Have you looked at the Filebeat registry file, which records each input file's path, inode, and current read position? Are the old log files untouched?
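To make the registry check concrete, here is a minimal sketch. The registry location and entry layout vary by Filebeat version (older versions keep a single JSON file, commonly under `/var/lib/filebeat/registry`; newer versions keep it under the data path in `registry/filebeat/`), so the snippet below fabricates one sample entry in the assumed older single-file format and then prints each tracked file's path, inode, and read offset:

```shell
# Fabricate a sample registry entry (assumed older single-file JSON format)
# so the inspection step is self-contained. On a real host you would point
# this at your actual registry file instead.
cat > /tmp/sample-registry.json <<'EOF'
[{"source": "/var/log/app/app.log", "offset": 12345, "FileStateOS": {"inode": 526712, "device": 2049}}]
EOF

# Print path, inode, and current read offset for each tracked file.
python3 - <<'EOF'
import json
with open("/tmp/sample-registry.json") as f:
    for entry in json.load(f):
        print(entry["source"], entry["FileStateOS"]["inode"], entry["offset"])
EOF
```

If the inode recorded for an old file no longer matches the inode on disk (for example after log rotation or a copy), Filebeat treats it as a new file and reads it from the beginning, which would explain the indices reappearing.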
Well, this is hard to debug remotely; you'll have to keep digging. How do you know the old logs are untouched? Have you increased the Filebeat log level to get additional clues? What happens in the registry file when the files are reread?
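For reference, raising the log level is a small change in `filebeat.yml`; the selector names below are the standard Filebeat logging options, narrowed here to the components most relevant to re-reads:

```yaml
# filebeat.yml — raise verbosity while debugging
logging.level: debug
# Optionally limit debug output to registry and file-reading activity:
logging.selectors: ["registrar", "harvester"]
```

With this in place, the Filebeat log should show when a harvester is (re)started for a file and why, which is usually enough to tell a rotated/copied file from a genuine re-read.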
I suggest you move this topic to the Filebeat category where the Filebeat experts hang out.