Detecting and Alerting on Log Loss from Logstash or Beats

Hello,

I am trying to detect/alert when either Logstash or a Beats product stops sending logs.

The problem we have had in the past is that when a specific Logstash server goes down, or is up but not sending any logs, we were unaware until we tried to look up logs from that server and found no new events coming in.

Question: Is there any simple way to alert when a host stops sending logs to Elasticsearch after 24 hours or so?

If you have an X-Pack license for Watcher then I would recommend setting up a basic watch.
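For example, a minimal sketch of such a watch (assumptions: a logstash-* index pattern, the default @timestamp field, and a configured email account; on older X-Pack releases the endpoint is _xpack/watcher/watch instead of _watcher/watch) that fires when no documents arrived in the last 10 minutes:

```json
PUT _watcher/watch/no_logs_alert
{
  "trigger": { "schedule": { "interval": "10m" } },
  "input": {
    "search": {
      "request": {
        "indices": ["logstash-*"],
        "body": {
          "size": 0,
          "query": {
            "range": { "@timestamp": { "gte": "now-10m", "lte": "now" } }
          }
        }
      }
    }
  },
  "condition": {
    "compare": { "ctx.payload.hits.total": { "eq": 0 } }
  },
  "actions": {
    "notify_email": {
      "email": {
        "to": "ops@example.com",
        "subject": "No logs received in the last 10 minutes"
      }
    }
  }
}
```

Adjust the index pattern, interval, and recipient address to match your environment.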

Otherwise, what about setting up a Logstash elasticsearch input that polls the index every 10 minutes and ensures there are records that match a range query like { "range": { "@timestamp": { "gte": "now-10m", "lte": "now" } } }? If there are no matches, use the email output plugin (or something similar) to send a notification.
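One catch with the elasticsearch input here: if the query returns zero hits, no events flow through the pipeline and there is nothing to trigger the email. A sketch that works around this uses the http_poller input to hit the _count API instead, so every poll produces exactly one event carrying the count (assumptions: Elasticsearch on localhost:9200, a logstash-* index pattern, the default @timestamp field, and a hypothetical recipient address):

```
input {
  http_poller {
    urls => {
      recent_count => {
        method => post
        url => "http://localhost:9200/logstash-*/_count"
        headers => { "Content-Type" => "application/json" }
        body => '{ "query": { "range": { "@timestamp": { "gte": "now-10m", "lte": "now" } } } }'
      }
    }
    schedule => { every => "10m" }
    codec => "json"
  }
}

output {
  # The _count response is parsed into the event; alert when the count is zero.
  if [count] == 0 {
    email {
      to => "ops@example.com"
      subject => "No new logs in the last 10 minutes"
      body => "The range query over logstash-* returned zero documents."
    }
  }
}
```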

Thanks for your response. Yes, I have X-Pack and Watcher. What would the alert be based on? Doc count in the last 10 minutes?

That's one possibility, yes. It really depends on your environment and your expectations. For some, 10 minutes would be appropriate, but from your question it sounds like you would even be fine with checking the count in a 24-hour window?

Keep in mind that for small window sizes, ingestion delay can look like log loss if the evaluation happens too eagerly. So depending on the performance characteristics of your system, a window like [now - 15 minutes, now - 5 minutes] might be more appropriate, giving the system 5 minutes to catch up.
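Expressed with Elasticsearch date math (assuming the standard @timestamp field), that shifted window would look like:

```json
{
  "query": {
    "range": {
      "@timestamp": {
        "gte": "now-15m",
        "lte": "now-5m"
      }
    }
  }
}
```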

Do you have a fixed list of hosts that you want to check for or is that list dynamic?