I am putting this under Logstash as I am reasonably sure (but not totally sure) I will need Logstash to achieve this.
We have around 170 servers, all running the same application. On each of these servers there are about 5 log files we need to monitor. We need to trigger an alert if any **one** of these log files stops being written to for a period of time (say 1 minute, for example), as this indicates the application has failed.
I can get Filebeat running on them to monitor the files and ship to Elasticsearch, but I can't get my head around what to do next (I'm a server and infrastructure guy, not a developer :-)).
I am thinking I will have to send logs to Logstash instead of ES, then use a pipeline to write each hostname/logfile combination to its own index in ES, and from there look at a Watcher (or ML job?) to alert if the number of lines in the file hasn't changed over an interval.
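To make the condition I'm after concrete, here's a rough Python sketch of the staleness check (the hostnames, paths, and the `find_stale_streams` helper are just placeholders I made up; in practice the "last seen" timestamps would come from something like a max-timestamp aggregation per `host.name`/`log.file.path` in ES, rather than a dict):

```python
from datetime import datetime, timedelta

def find_stale_streams(last_seen, now, threshold=timedelta(minutes=1)):
    """Return the (host, logfile) pairs whose newest event is older than threshold.

    last_seen maps (host, logfile) -> datetime of the most recent log line.
    """
    return sorted(key for key, ts in last_seen.items() if now - ts > threshold)

# Hypothetical sample data: two servers, one log file each, one has gone quiet.
now = datetime(2024, 1, 1, 12, 0, 0)
last_seen = {
    ("server-001", "/var/log/app/app.log"): now - timedelta(seconds=10),
    ("server-002", "/var/log/app/app.log"): now - timedelta(seconds=90),
}
print(find_stale_streams(last_seen, now))
# Only server-002 exceeds the 1-minute threshold, so it alone should alert.
```

The point being: the alert fires per host/logfile stream, not per server, so one quiet file among the ~850 streams is enough to trigger it.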