Hi,
We're trying to set up Logstash in a highly available (HA) configuration.
Does Logstash support HA? If not, is there any way to achieve it?
Logstash does not natively support HA; you need other tools to build an HA Logstash deployment, such as a message queue like Kafka, virtual IPs, or a load balancer like HAProxy or NGINX. It depends on what you need.
What is your use case?
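For example, if you ship with Beats over TCP, you could put a TCP load balancer in front of several Logstash nodes. A minimal HAProxy sketch, assuming hypothetical hosts node1 to node3 all listening on the Beats port 5044:

# Beats speaks a TCP protocol, so HAProxy must run in tcp mode
frontend beats_in
    bind *:5044
    mode tcp
    default_backend logstash_nodes

backend logstash_nodes
    mode tcp
    balance roundrobin
    server logstash1 node1:5044 check
    server logstash2 node2:5044 check
    server logstash3 node3:5044 check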
You can also load balance from Filebeat itself:
output.logstash:
  hosts: ["node1:5044", "node2:5044", "node3:5044"]
  loadbalance: true
More info: https://www.elastic.co/guide/en/beats/filebeat/current/load-balancing.html
Lots of good information here
Hi,
Right now we have a single instance of Logstash whose data is mounted on GlusterFS. We are trying to remove GlusterFS, so we are looking for a way to keep Logstash's data available during a k8s node failure.
For example: if Logstash is enabled with a persistent queue, we need some form of disk replication like GlusterFS or Ceph. If Logstash is not enabled with a persistent queue, i.e. it uses the in-memory queue, then when a node fails k8s will reschedule Logstash onto another worker node, where Logstash will come up but without the old data. So we are trying to figure out how to handle this scenario.
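(For reference, the persistent queue in question is enabled in logstash.yml with settings along these lines; the path and size values here are only illustrative:)

queue.type: persisted
path.queue: /usr/share/logstash/data/queue   # must sit on durable storage to survive a node failure
queue.max_bytes: 1gb                         # illustrative cap on queue size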
Thanks @stephenb.
Can you help me with the query above? Note: we are using the Kafka input plugin.
I would suggest using a Kafka cluster as a message queue, but it seems you are already doing that.
With Kafka you can have multiple Logstash instances as consumers. If one node fails, you can spin up another node and it will start consuming from where the previous node stopped; you just need to configure the same group_id in the Kafka input on every node.
With the same group_id you can also have multiple nodes running at the same time, or, if your topics start to lag, you can start additional nodes temporarily to help empty the queue faster.
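A minimal sketch of such a Kafka input, assuming hypothetical broker addresses and a topic named "logs"; group_id is the setting that must match across nodes:

input {
  kafka {
    bootstrap_servers => "kafka1:9092,kafka2:9092"  # hypothetical brokers
    topics => ["logs"]                              # hypothetical topic name
    group_id => "logstash"                          # identical on every Logstash node
  }
}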
By "node", do you mean a Kubernetes worker node?
"Start other nodes temporarily": I didn't get that, can you please elaborate?
"If one node fails, you can spin up another node": what do you mean?
A Logstash node, not Kubernetes. It doesn't matter where your Logstash instances are running; you just need all the Logstash instances that consume from your Kafka to use the same group_id.
If one of your Logstash instances fails, you can start a new one and it will resume consuming from where the last one stopped.
If the queue in your Kafka topics gets too big and starts lagging, you can start a new Logstash instance to help consume the queue, and then stop it later once things are back to normal.
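In Kubernetes terms, that temporary scale-out could be as simple as the following, assuming a hypothetical Deployment named logstash; keep in mind Kafka will only assign work to as many consumers in a group as the topic has partitions:

# scale out extra consumers while the topic is lagging (hypothetical Deployment name)
kubectl scale deployment logstash --replicas=3
# scale back down once the lag is consumed
kubectl scale deployment logstash --replicas=1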