Elasticsearch failover not working with 2 nodes

I am setting up an Elasticsearch cluster with ES version 2.0.0. I have set up a cluster with two nodes on EC2.

I have found the following issues:

Auto discovery not working: With two nodes set up with the same cluster.name, the nodes failed to discover each other. After changing the config as described in this post, I got a basic cluster working with a master and a slave.

Automatic failover: The cluster failed to elect the slave as master when node 1 was stopped, which made the cluster inoperable.

What may be the reason that the ES cluster is not failing over?

I see nothing extra in the logs except:

[discovery.zen] [stag-elastic-node-2] master left (reason = shut_down),.....
No logs related to election appear in the log file of either node.

Config file node 1:

cluster.name: stag-elastic-cluster
node.name: stag-elastic-node-1
index.number_of_shards: 2
index.number_of_replicas: 1
network.host: 0.0.0.0
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["10.x.x.x","10.x.x.x"]
Config file node 2:

cluster.name: stag-elastic-cluster
node.name: stag-elastic-node-2
index.number_of_shards: 2
index.number_of_replicas: 1
network.host: 0.0.0.0
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["10.x.x.x","10.x.x.x"]
Ports 9200 and 9300 are open in both directions. Any help will be appreciated.
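
As a sanity check, the cat APIs can show which node the cluster currently considers master and which nodes have joined; a minimal example, assuming the default HTTP port 9200 is reachable on either node:

curl 'http://localhost:9200/_cat/master?v'
curl 'http://localhost:9200/_cat/nodes?v'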


I suspect some of the issues you are seeing might be due to trying to set up unicast discovery on EC2. You should look into using the cloud-aws plugin instead to do discovery in EC2. https://www.elastic.co/guide/en/elasticsearch/plugins/2.0/cloud-aws.html
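
A minimal sketch of what that might look like on ES 2.0, with placeholder credentials and region (see the linked docs for the full set of options):

# install the plugin on each node
bin/plugin install cloud-aws

# then add to elasticsearch.yml on each node
cloud.aws.access_key: ACCESS_KEY_HERE
cloud.aws.secret_key: SECRET_KEY_HERE
cloud.aws.region: us-east-1
discovery.type: ec2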

Thanks, jpountz!

I tried reverting to the default config in elasticsearch.yml, like this:

Config file node 1:
cluster.name: stag-elastic-cluster
node.name: stag-elastic-node-1
index.number_of_shards: 2
index.number_of_replicas: 1
network.host: 0.0.0.0

Config file node 2:
cluster.name: stag-elastic-cluster
node.name: stag-elastic-node-2
index.number_of_shards: 2
index.number_of_replicas: 1
network.host: 0.0.0.0

and installed the cloud-aws plugin. After restarting both nodes, they should have auto-discovered each other and formed a cluster, since the cluster name is the same on both nodes. This does not seem to be working in this case.
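
One quick way to confirm whether the two nodes have actually formed a single cluster, assuming HTTP is reachable on port 9200 of either node, is the cluster health API; number_of_nodes should report 2:

curl 'http://localhost:9200/_cluster/health?pretty'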

@Astro-es There are no cloud-aws plugin settings in your elasticsearch.yml file?

cloud.aws.access_key: ACCESS_KEY_HERE
cloud.aws.secret_key: SECRET_KEY_HERE
cloud.aws.region: us-east-1
discovery.type: ec2

Hi Niraj,

These settings seem to have been removed from the default elasticsearch.yml in version 2.0.0.
Also, if I am not wrong, these settings may be required for Snapshot and Restore.

Thanks,

@Astro-es

Looking at this link:
https://www.elastic.co/guide/en/elasticsearch/plugins/master/discovery-ec2-discovery.html

I see that you have to set a tag to access it, but I am confused as to where the cloud-aws plugin will accept credentials from. Simply installing the plugin doesn't make sense, as it doesn't know which credentials to use. Sorry, I haven't used this in 2.0, but the above settings work for me on an older version.
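
If I am reading that page right, the tag filter would look something like the lines below; the tag name and value here are hypothetical and would have to match tags actually set on your EC2 instances:

discovery.type: ec2
discovery.ec2.tag.stage: staging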

--
Niraj

@niraj_kumar

I have noticed that the nodes discovered each other after installing this plugin.
Note: no changes were made to elasticsearch.yml regarding EC2 discovery.

It seems that it took some time for the nodes to discover each other and form a cluster after the plugin was installed. Also, failover is working as expected.

Thanks!

Does your EC2 system have the credentials stored anywhere? I mean, how can the plugin know where the credentials are? It simply cannot go and join otherwise.

I am glad it worked for you.