Hello,
Do we have a way to configure failover for filebeat process?
Thanks in advance!
Surendra
Yes, like a secondary process that takes over the processing in case the primary fails.
Filebeat is deployed on all nodes, and on each one it keeps a registry of the logs that have been read and successfully sent, so no data should be lost if the process is restarted. In principle it shouldn't "fail" or stop unexpectedly (if it does, it should be considered a bug, so please report it). If you use a service manager (like systemd on Linux) to start Filebeat, you can configure it to restart the process automatically if it stops unexpectedly.
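For example, with systemd you can add a drop-in like the following (a sketch; the unit name `filebeat.service` assumes the standard package install, adjust to your setup):

```ini
# /etc/systemd/system/filebeat.service.d/restart.conf
[Service]
# Restart the process automatically if it exits with an error
Restart=on-failure
# Wait 5 seconds between restart attempts
RestartSec=5s
```

After adding it, run `systemctl daemon-reload` and restart the service for the drop-in to take effect.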
You can also configure multiple hosts in the output, so if one fails another one can be used.
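For instance, with the Logstash output it would look like this (host names are placeholders):

```yaml
# filebeat.yml
output.logstash:
  # If a host becomes unreachable, events are sent to the remaining ones
  hosts: ["logstash-a:5044", "logstash-b:5044"]
  # With loadbalance enabled, events are distributed across all hosts;
  # without it, the extra hosts act as failover targets
  loadbalance: true
```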
Filebeat 6.3 will have a new spooling feature (in beta) that will store events locally on disk if all the hosts in the output are down, so they can be sent when the hosts are back online.
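As a rough sketch of what enabling it could look like (the exact keys are from the beta documentation and may change, so please check the 6.3 docs before relying on them):

```yaml
# filebeat.yml -- beta spooling queue, 6.3+
queue.spool:
  file:
    # Spool file written under Filebeat's data path
    path: "${path.data}/spool.dat"
    size: 512MiB
```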
I hope this helps to answer your question. If you have a more specific question, or a specific scenario in mind, please ask.
Hi Jaime,
Thanks for explaining it to me! Can you let me know when 6.3 is generally available?
I cannot confirm a specific date, but we expect to release 6.3 really soon.