Filebeat (5.40) keeps recreating indices after deletion

I have Filebeat reading log files on a remote server and shipping them to Logstash on the same server. I tried deleting indices older than six months, but I noticed that the indices are recreated in Elasticsearch. Can you point out where I'm going wrong? I went with the default configuration. Thanks.

  paths:
    #- /var/log/*.log
    - \\webserver10\iislogs\*.log

output.logstash:
  # Boolean flag to enable or disable the output module.
  #enabled: true

  # The Logstash hosts
  hosts: ["localhost:5044"]

  # Number of workers per Logstash host.
  #worker: 1

Filebeat is only a log shipper; it doesn't handle index creation. Check your Logstash config to see whether the elasticsearch output defines those index names.

Here is the output config for logstash.

output {
  elasticsearch {
    hosts => ["elasticsearch:80"]
    index => "iis_logs-%{+YYYY.MM}"
    document_type => "iislog"
  }
}

It seems Filebeat is still reading logs from that path and sending them to Logstash, and Logstash is pushing them into the iis_logs-* indices. Make sure no Filebeat instance is sending logs to Logstash while you delete the indices.
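To make the recreation mechanism concrete: Elasticsearch auto-creates an index the moment a client writes to it (with the default `action.auto_create_index` setting), so deleting iis_logs-* while Logstash is still shipping old events simply triggers recreation on the next bulk write. A rough sequence, with the host and port taken from the Logstash output above; the service name and example index month are assumptions:

```
# Stop Filebeat first so it cannot re-ship old events
# (on Windows, assuming it runs as a service named "filebeat"):
#   PS> Stop-Service filebeat

# Then delete the old monthly index:
curl -XDELETE 'http://elasticsearch:80/iis_logs-2017.01'

# If Filebeat is still running, any event for that month arriving via
# Logstash will transparently recreate the index you just deleted.
```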

I can see from the Filebeat registry that Filebeat is re-reading old, untouched log files and shipping them to Logstash. Is there a config setting that stops Filebeat from re-reading those logs, or from shipping old log files to Logstash?

ignore_older: 3h
You can use this setting to skip files older than the given duration. You can also point Filebeat at a fresh registry file if you are sure it is pushing logs from stale registry state:

registry_file: 'C:\ProgramData\Filebeat\registry'  (if windows)
registry_file: '/var/registrydata/filebeat/registry' (if linux)
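Putting the two together, here is a minimal filebeat.yml sketch for Filebeat 5.x. The prospector layout mirrors the paths config from the first post; the clean_inactive value and 3h threshold are assumptions you should tune to your log rotation (clean_inactive must be greater than ignore_older plus scan_frequency):

```
filebeat.prospectors:
  - input_type: log
    paths:
      - \\webserver10\iislogs\*.log
    # Skip files whose last modification is older than 3h
    ignore_older: 3h
    # Drop registry state for files inactive longer than 4h,
    # so old files are not tracked (and re-read) forever
    clean_inactive: 4h

# Registry path (Windows default shown; adjust for your OS)
filebeat.registry_file: 'C:\ProgramData\Filebeat\registry'
```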

Sorry, what do you want me to change in the registry file on Windows?
Thanks for your response.

Yes, try setting the registry file path appropriate to your OS, and use ignore_older as well.


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.