Logstash file-input doesn't pick up new files

We have a simple Logstash 7.1.1 installation on a hardened Red Hat 7.7 system.

pipelines.yml

- pipeline.id: synonyms-ingest
  pipeline.workers: 1
  path.config: "/etc/logstash/CDE/synonym-logstash-csv-to-es.conf"

synonym-logstash-csv-to-es.conf

input {
  file {
    stat_interval => "10 seconds"
    start_position => "beginning"
    path => "/Data/Synonyms/synonyms.tsv"
    # /dev/null means read positions are not persisted across restarts
    sincedb_path => "/dev/null"
    mode => "read"
    file_completed_action => "delete"
  }
}
filter { ....... }
output {
  amazon_es {
    manage_template => false
    document_id => "%{[@metadata][generated_id]}"
    hosts => ["${ES_HOST}"]
    index => "synonym-english"
  }
}

The first time Logstash runs, it ingests the file correctly into the endpoint.

If synonyms.tsv is deleted and re-created (the new file has a different inode and a different last-modified date), Logstash doesn't re-run the ingestion until it is restarted.
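For anyone reproducing this, the inode and last-modified time can be compared before and after the file is replaced; a quick check, assuming GNU coreutils stat (standard on RHEL 7):

# print the inode and modification time of the watched file
stat -c 'inode=%i mtime=%y' /Data/Synonyms/synonyms.tsv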

This problem is happening on a hardened AWS instance provided by the client. I don't know the extent of the hardening, but I suspect it has something to do with the issue. My next test will be to run it on a standard image to see whether that resolves the issue.

Can anybody offer any suggestions about how to diagnose this?

If you set the log level to TRACE, filewatch should log enough for you to see whether it thinks it has already seen the file. Since sincedb_path is set to /dev/null, nothing is persisted across restarts; so if it re-reads the file after a restart but not after a delete-and-recreate, that suggests the in-memory sincedb tracking is the problem.
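For example, assuming the Logstash monitoring API is listening on its default port 9600, the filewatch log level can be raised at runtime through the logging API (a sketch; "filewatch" is the parent logger used by the file input's watch code, and child loggers should inherit the level):

# raise the filewatch loggers to TRACE without restarting Logstash
curl -XPUT 'localhost:9600/_node/logging' -H 'Content-Type: application/json' -d '
{
  "logger.filewatch" : "TRACE"
}'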

