I'm currently using the default settings for Filebeat, and I also send the monitoring data to an Elasticsearch cluster.
My workflow for logs is Filebeat -> Logstash -> Elasticsearch
Just wondering, how can we monitor Filebeat's internal queue? From my understanding, when Filebeat doesn't receive an ack from Logstash it queues up the data.
But how can I tell if the queue is full, and how can I get alerted on it?
The monitoring dashboard on Kibana doesn't seem to reflect the relevant data.
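For reference, my setup looks roughly like the sketch below (host names are placeholders, and the queue block just spells out the defaults I'm running with):

```yaml
# filebeat.yml (sketch; hosts are placeholders, queue values are the defaults I'm on)
filebeat.inputs:
  - type: log
    paths:
      - /var/log/*.log

# Internal memory queue -- events sit here until Logstash acknowledges them
queue.mem:
  events: 4096
  flush.min_events: 2048
  flush.timeout: 1s

output.logstash:
  hosts: ["logstash:5044"]

# Ship Filebeat's own monitoring data to a separate Elasticsearch cluster
monitoring.enabled: true
monitoring.elasticsearch.hosts: ["https://monitoring-es:9200"]
```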
It is possible to gather metrics from both Beats and Logstash and send them to Elasticsearch. You mention that Kibana did not show the metrics you are after, but is that with this monitoring enabled?
You can see a quick how-to here on how to do it for both:
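For reference, a rough sketch of what that looks like (hosts are placeholders; on older releases the Beats settings live under xpack.monitoring.* instead):

```yaml
# filebeat.yml -- ship Filebeat's own metrics to the monitoring cluster
monitoring.enabled: true
monitoring.elasticsearch.hosts: ["https://monitoring-es:9200"]
```

```yaml
# logstash.yml -- ship Logstash's own metrics to the monitoring cluster
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.hosts: ["https://monitoring-es:9200"]
```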
Yes, I have all of these enabled. But which metrics refer to the Filebeat queue status?
E.g.
For Filebeat, my setting is queue.mem.events = 4096.
But from the metrics I don't see anything like x/4096
I don't see the number of events in the queue.
I simulated a situation where Logstash is down, and I can see metrics about Fail Rates.
But how can I know if Filebeat's internal queue is full? In my production environment I have zero tolerance for losing events, so I need to know the status of Filebeat's queue and take action immediately.
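For concreteness, the closest thing I've found so far is Filebeat's local HTTP stats endpoint (a sketch; I'm assuming libbeat.pipeline.events.active is the counter that reflects events currently held in the queue, which I haven't confirmed):

```yaml
# filebeat.yml -- expose Filebeat's internal stats over HTTP on localhost
http.enabled: true
http.host: localhost
http.port: 5066

# Then poll it from the same host, e.g.:
#   curl -s http://localhost:5066/stats
# and watch libbeat.pipeline.events.active -- if it keeps climbing towards
# queue.mem.events (4096 here), the queue is filling up because the output
# (Logstash) is not acking fast enough.
```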