We have a simple Logstash 7.1.1 installation on a hardened Red Hat 7.7 instance.
pipelines.yml:

- pipeline.id: synonyms-ingest
  pipeline.workers: 1
  path.config: "/etc/logstash/CDE/synonym-logstash-csv-to-es.conf"
synonym-logstash-csv-to-es.conf:

input {
  file {
    stat_interval => "10 seconds"
    start_position => "beginning"
    path => "/Data/Synonyms/synonyms.tsv"
    sincedb_path => "/dev/null"
    mode => "read"
    file_completed_action => "delete"
  }
}

filter { ....... }

output {
  amazon_es {
    manage_template => false
    document_id => "%{[@metadata][generated_id]}"
    hosts => ["${ES_HOST}"]
    index => "synonym-english"
  }
}
The first time Logstash runs, it ingests the file into the endpoint correctly.
If synonyms.tsv is deleted and re-added (the new file has a different inode and a different last-modified date), Logstash does not re-run the ingestion until it is restarted.
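To rule out the filesystem side of this, here is a quick sketch of how I verified that the replacement file really gets a new inode (assumes a Linux box with GNU `stat`; `/tmp/synonyms-demo.tsv` is a hypothetical stand-in for the real path):

```shell
# Sketch: confirm that replacing a file yields a new inode.
f=/tmp/synonyms-demo.tsv
printf 'word\tsynonym\n' > "$f"
ino1=$(stat -c %i "$f")

# Write the replacement first, then rename it over the original, so the
# new inode is allocated while the old one still exists.
printf 'word\tsynonym\n' > "$f.new"
mv "$f.new" "$f"
ino2=$(stat -c %i "$f")

echo "old inode: $ino1  new inode: $ino2"
```

In my case the inode and mtime both change, so the watcher should have every reason to treat it as a new file.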
This problem is happening on a hardened AWS instance provided by the client. I don't know the extent of the hardening, but I suspect it is related to the issue. My next test will be to run the same setup on a standard image and see whether that resolves it.
Can anybody offer any suggestions about how to diagnose this?
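One diagnostic I'm planning to try is turning up the file input's logging. The file input is built on the filewatch library, so something along these lines added to log4j2.properties should show what the watcher sees on each stat pass (the exact logger name is my assumption; I haven't verified it against 7.1.1):

```properties
# Sketch: trace logging for the library behind the file input
# (logger name "filewatch" assumed, not verified).
logger.filewatch.name = filewatch
logger.filewatch.level = trace
```

If the trace output shows the new file being discovered but never read, that would point at the plugin; if the file is never discovered at all, that would point at the hardened filesystem/permissions.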