ELK clustering

We are planning to cluster our ELK nodes. The idea is to run the whole ELK stack on one node and also on a second node with the same configuration, but we want the second node to act as a standby: Filebeat should send data to the second node whenever the first node is unreachable. Is it possible to do this? Please point me to a related document.

Filebeat does not differentiate between clusters.

Are you sending to Logstash?

Why not rely on Elasticsearch clusters for HA and reliability?

Have you considered cross-cluster replication?

I don't need it to differentiate between clusters; I need it to differentiate between the two ELK nodes. If the primary node is down, Filebeat should learn that it has to send data to the secondary node, and I should still be able to access Kibana via the URL served from the secondary node.

Yes, I am sending it to Logstash.

One configures the Logstash endpoints/nodes in Beats, but Beats assumes that all configured endpoints belong to the same cluster.
One can run Beats in failover mode (set output.logstash.loadbalance: false). In this case Beats publishes to only one Logstash node at a time, but which node is chosen is random.
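For illustration only, here is a minimal filebeat.yml sketch of that failover setup; the hostnames, port, and log path are placeholders, and you should check the option names and defaults against the Filebeat reference for your version:

```yaml
# filebeat.yml (sketch, not a drop-in config)
filebeat.inputs:
  - type: log              # 'filestream' in newer Filebeat versions
    paths:
      - /var/log/app/*.log # placeholder path

output.logstash:
  # Both Logstash endpoints are listed; hostnames are placeholders.
  hosts: ["elk-node-1:5044", "elk-node-2:5044"]
  # loadbalance: false = failover mode: Filebeat picks one host (at random)
  # and only moves to another host when the current one becomes unreachable.
  loadbalance: false
```

Note that this only covers where Filebeat ships events; it does not make Kibana or Elasticsearch on the standby node take over automatically.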

Why do you need separate clusters, each containing Elasticsearch, Logstash, and Kibana? Why not one Elasticsearch cluster and a fleet of Logstash instances that Beats publishes to?
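As a rough sketch of that alternative (hostnames again placeholders), Filebeat would load-balance across the Logstash fleet, and every Logstash instance would point its Elasticsearch output at the same cluster:

```yaml
# filebeat.yml (sketch)
output.logstash:
  # A fleet of Logstash instances in front of a single Elasticsearch cluster.
  hosts: ["logstash-1:5044", "logstash-2:5044", "logstash-3:5044"]
  # loadbalance: true = spread events across all reachable Logstash nodes
  # instead of sticking to one until it fails.
  loadbalance: true
```

With this layout, high availability for storage and Kibana comes from the Elasticsearch cluster itself rather than from a standby copy of the whole stack.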
