How to configure Logstash to send data to an Elasticsearch cluster in 6.2

Hello,

I have set up an Elasticsearch cluster with two nodes. I need to use this as a high availability cluster. Currently Logstash is sending data to the master node and it gets replicated to the slave node.

However, if the master node goes down, data is not sent to the other node. Could you please help with this?

Elasticsearch is a clustered system, not a master-slave architecture. If you only have two nodes in your Elasticsearch cluster, you therefore cannot have a highly available setup. If only one of the nodes is master eligible, losing that node will cause the cluster to stop accepting writes. The same is the case if both nodes are master eligible, as a single node does not form a majority and therefore cannot elect a master.

In order to have a highly available cluster that can tolerate one node going down, you need a minimum of 3 master-eligible nodes.

Hi Christian_Dahlqvist,

Thanks for the reply. Could you please help me, or point me to the steps for setting up a configuration that can tolerate one node going down?

Also, which hosts need to be configured in the Logstash configuration? I tried setting the "cluster" parameter, but Logstash gives a config error.

I would recommend that you set up three nodes with the default master/data configuration (each node is master-eligible and holds data). You will need to set minimum_master_nodes to 2 and make sure the nodes can find each other by adding all of them to the discovery.zen.ping.unicast.hosts list.
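
As a rough sketch (the cluster name, node names and IP addresses below are just placeholders for your environment), each node's elasticsearch.yml could look something like this:

```
# elasticsearch.yml on each of the three nodes (adjust names and IPs per node)
cluster.name: my-cluster
node.name: node-1                      # node-2 / node-3 on the other hosts
node.master: true                      # default in 6.x, shown for clarity
node.data: true                        # default in 6.x, shown for clarity
network.host: 192.168.1.11             # this node's own IP
discovery.zen.ping.unicast.hosts: ["192.168.1.11", "192.168.1.12", "192.168.1.13"]
discovery.zen.minimum_master_nodes: 2  # majority of 3 master-eligible nodes
```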

The Logstash elasticsearch output should then have all these nodes listed in order to be able to balance load and fail over if needed.
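
For example, something along these lines (the IPs and index name are again placeholders). Note that the cluster option was removed from the elasticsearch output in earlier Logstash versions, which is likely why you see a config error; just list the hosts instead:

```
output {
  elasticsearch {
    # list all three nodes so Logstash can load balance and fail over
    hosts => ["http://192.168.1.11:9200", "http://192.168.1.12:9200", "http://192.168.1.13:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
```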
