Logstash 5.4 recreating the indices after deletion

I have Filebeat reading the log files on a remote server and shipping them to Logstash on the same server. I tried deleting the old indices from the last six months, but I noticed that the indices get recreated in Elasticsearch.
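For reference, deleting one of the monthly indices looks roughly like the command below (the index name is only an example matching the iis_logs-%{+YYYY.MM} pattern from the output config, and the host should be adjusted to wherever Elasticsearch is reachable):

curl -XDELETE 'http://elasticsearch:80/iis_logs-2016.12'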

Here is the output configuration:

output {
  elasticsearch {
    hosts => ["elasticsearch:80"]
    index => "iis_logs-%{+YYYY.MM}"
    document_type => "iislog"
  }
}

If Logstash is recreating the indexes, it's because something is still sending events to Logstash.

It sounds like Filebeat is re-reading the files, something it normally won't do. When does this happen? Have you looked at the Filebeat registry file, which records the file path, inode, and current read position for each input log file? Are the old log files untouched?
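For Filebeat 5.x the registry is a plain JSON file you can open in a text editor. A single entry looks roughly like the sketch below; the path, offset, and file IDs are made-up illustrations, and on Windows the inode/device pair is replaced by the idxhi/idxlo/vol fields:

[
  {
    "source": "C:\\inetpub\\logs\\LogFiles\\W3SVC1\\u_ex170101.log",
    "offset": 1048576,
    "timestamp": "2017-06-01T10:00:00Z",
    "ttl": -1,
    "FileStateOS": {
      "idxhi": 262144,
      "idxlo": 181213,
      "vol": 1080312314
    }
  }
]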

Yes, the old logs are untouched. But I can still see from the Filebeat registry that Filebeat is re-reading the old, untouched files.

Well, this is hard to debug remotely; you'll have to keep digging. How do you know the old logs are untouched? Have you increased the Filebeat log level to get additional clues? What happens in the registry file when it re-reads the files?
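If it helps, a minimal debug-logging setup in filebeat.yml could look roughly like this (the log directory is only an example; the selectors restrict the output to the components involved in file tracking):

logging.level: debug
logging.selectors: ["prospector", "harvester", "registrar"]
logging.to_files: true
logging.files:
  path: C:\ProgramData\Filebeat\logs
  name: filebeat.log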

I suggest you move this topic to the Filebeat category where the Filebeat experts hang out.

Updating the Filebeat config with the correct registry path fixed the issue:

filebeat.registry_file: 'C:\ProgramData\Filebeat\registry'

Presumably Filebeat had been looking for its registry in a different location, so it started with an empty registry and re-read the old files from the beginning; pointing registry_file at a fixed absolute path keeps the stored read offsets across restarts.
