I am shipping some log files to Elasticsearch through Logstash using Filebeat, and any component in this chain could go down. So in order to monitor Filebeat, I want to ship filebeat.log to Elasticsearch along with the other log files. But I can't use another Filebeat to monitor the first Filebeat's log file, as that would end up in a loop. Is there some way to ship Filebeat's own logging to Logstash or Elasticsearch, similar to the output section of Filebeat?
But I can't use another Filebeat to monitor the first Filebeat's log file, as that would end up in a loop.
How so? Having a single Filebeat instance monitor itself can easily become a problem, but I don't see how a second instance would be problematic.
If I have a second Filebeat to monitor the first one, I end up with a who-will-monitor-the-monitor problem, and with a chain of Filebeats each monitoring the previous one.
But if Filebeat could instead ship its own logs to Elasticsearch, then whenever it stops it would log that it's stopping before shutting down, so I could see in Elasticsearch when and why it stopped. I could also have an Elasticsearch-based probe alert me whenever Filebeat goes down.
I end up with a who-will-monitor-the-monitor problem, and with a chain of Filebeats each monitoring the previous one.
That's a valid point, but at some point you're going to have to trust that a watcher is doing its job. Having Filebeat ship its own logs might avoid that particular problem, but it brings other problems to the table.
But if Filebeat could instead ship its own logs to Elasticsearch, then whenever it stops it would log that it's stopping before shutting down, so I could see in Elasticsearch when and why it stopped.
What makes you think you can rely on Filebeat to log when it's about to shut down? There are a number of reasons why that wouldn't always happen; a crash or a SIGKILL, for example, leaves no opportunity to log anything.
What you care about, in the end, is Logstash processing events from each source (host and/or service). If those events stop flowing out of Logstash, someone needs to look into why. I prefer hooking up Logstash to Lovebeat via a statsd output and configuring Lovebeat to alert me if it hasn't seen an event from a host for a certain amount of time. If you run multiple instances of Lovebeat on different machines, you can be reasonably sure you're avoiding the who-watches-the-watcher problem.
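For reference, a minimal sketch of what that hookup might look like on the Logstash side, using the statsd output plugin. The hostname and port here are placeholders for your Lovebeat instance's statsd listener, not anything standard:

```
output {
  statsd {
    host      => "lovebeat.example.com"  # placeholder: your Lovebeat host
    port      => 8125                    # placeholder: Lovebeat's statsd port
    namespace => "logstash"
    sender    => "%{host}"               # one counter per source host
    increment => ["events"]              # bump the counter for every event
  }
}
```

With a per-host metric name like this, Lovebeat can then be configured to raise an alert when a given host's counter stops updating.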
Lovebeat seems to solve my issue. I will give it a try. Thanks for the quick response.
I tried following the Lovebeat alerting process, but Lovebeat still only alerts me when no data flows into Logstash from the Filebeat on a certain host. That can happen not only when Filebeat goes down, but also when there is simply no data for Filebeat to push. If I could configure some kind of heartbeat on Filebeat, so that every 10 seconds or 1 minute it sends a message to Logstash saying "hey, I am alive and working", then I could hook up Lovebeat and alert whenever Filebeat is down. Does Filebeat have such an option, rather than manually appending data to a file every 10 seconds and making Filebeat monitor it?
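For what it's worth, the manual workaround would look something like this sketch. The file path and interval are arbitrary, and this is not a built-in Filebeat feature; Filebeat would need a prospector pointed at the heartbeat file:

```shell
#!/bin/sh
# Hypothetical heartbeat workaround: run from cron every minute (or from
# a loop with `sleep 10`) to append a line to a file that Filebeat is
# already configured to tail.
TS=$(date -u +%Y-%m-%dT%H:%M:%SZ)
echo "$TS heartbeat host=$(hostname) status=alive" >> /tmp/filebeat-heartbeat.log
```

Lovebeat can then alert on the absence of these heartbeat events specifically, independent of whether the real log files are producing data.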
It seems a heartbeat feature is already planned for Filebeat (all Beats) here. Any idea when this feature will go live? Or is there an efficient alternative (workaround) I can implement until then?
There is no fixed timeline for this feature yet. Best is to subscribe to the GitHub issue to get notified about updates.
This topic was automatically closed after 21 days. New replies are no longer allowed.