Elasticsearch restart

Hello Folks,

I have 3 Elasticsearch (5.1.2) nodes (01, 02, 03), of which 01 and 02 are in Datacenter1 and 03 is in Datacenter2. The elasticsearch.yml on all 3 has the settings below:

  1. The cluster name is the same across all three - it is cluster01.
  2. Host and node names are specific to each Linux server, like node01, node02 and node03.
  3. discovery.zen.ping.unicast.hosts: ["node01", "node02", "node03"]
  4. We have Kibana pointing to node01.

I have a few questions here:

  1. In the above scenario, is there a master? And a slave?
  2. What is the best way to stop/start ES? One node at a time? We tried stopping/starting nodes 3, 2, 1, and the nodes threw errors like "no master" (how we currently stop/start them is sketched after this list):
    path: /_bulk, params: {}org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/2/no master];
  3. Is it recommended to switch to a master/slave mode? Our current setup is running fine; I just want to make sure we stop/start the nodes in a way that minimizes errors.
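For reference, this is roughly how we stop and start each node today - a minimal sketch, assuming a systemd install with the default service name (our exact commands may differ):

    # Stop Elasticsearch on the node being worked on:
    sudo systemctl stop elasticsearch

    # ...do the maintenance, then bring it back up:
    sudo systemctl start elasticsearch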

Any feedback/recommendations would be helpful.

  1. Check _cat/master (see the example after this list).
  2. What settings have you applied to the cluster?
  3. There is no such thing as a slave, only master and master-eligible.
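For reference, a quick way to check this from the command line - a minimal sketch, assuming HTTP is on port 9200 as in your setup:

    # Which node is the currently elected master:
    curl 'node01:9200/_cat/master?v'

    # All nodes, with a column marking the elected master with an asterisk:
    curl 'node01:9200/_cat/nodes?v&h=name,master,node.role'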

I checked _cat/master on all 3 nodes, and it returns node01. So that means node01 is the master and the other 2 are master-eligible?

Yep :slight_smile:

That helps, thanks a lot.

So for any maintenance we need to stop/start node01 first before proceeding to the others. Is that a correct statement in my scenario?

No, you should be able to restart any single node in that cluster and not lose access to the cluster.
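A rolling restart of a single node usually looks something like this - a minimal sketch, assuming port 9200 and a systemd service; adjust for your install:

    # 1. Disable shard allocation so the cluster does not start rebalancing:
    curl -XPUT 'node01:9200/_cluster/settings' -d '{
      "transient": { "cluster.routing.allocation.enable": "none" }
    }'

    # 2. Optionally run a synced flush to speed up recovery:
    curl -XPOST 'node01:9200/_flush/synced'

    # 3. Restart the node you are working on:
    sudo systemctl restart elasticsearch

    # 4. Wait for it to rejoin, then re-enable allocation:
    curl 'node01:9200/_cluster/health?wait_for_status=yellow&timeout=60s&pretty'
    curl -XPUT 'node01:9200/_cluster/settings' -d '{
      "transient": { "cluster.routing.allocation.enable": "all" }
    }'

If the node you are restarting is the one you normally query (node01), point the curl calls at one of the other nodes while it is down.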

What settings have you set in your config?

I have the settings below and was wondering why that "no master" error came up (was it referring to my node01?), and whether any data was lost.

cluster.name: cluster01
node.name: node01
path.data: /opt/data/elasticsearch-5.1.2
path.logs: /opt/logs/elasticsearch-5.1.2
bootstrap.memory_lock: true
network.host: node01
http.port: 9200
discovery.zen.ping.unicast.hosts: ["node01", "node02", "node03"]
discovery.zen.minimum_master_nodes: 3
http.cors.enabled: true
http.cors.allow-origin: "*"

That is why. Change that to 2.
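Something like this - a minimal sketch, assuming port 9200; the yml change takes effect on the next restart, and the setting can also be applied to the running cluster dynamically:

    # In elasticsearch.yml on each node:
    #   discovery.zen.minimum_master_nodes: 2

    # Or apply it to the live cluster without a restart:
    curl -XPUT 'node01:9200/_cluster/settings' -d '{
      "persistent": { "discovery.zen.minimum_master_nodes": 2 }
    }'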

After going through the documentation, it says the following:

This setting must be set to a quorum of your master eligible nodes. It is recommended to avoid having only two master eligible nodes, since a quorum of two is two. Therefore, a loss of either master eligible node will result in an inoperable cluster

Any thoughts? And also, how did you come up with 2?

The docs also say (number of master-eligible nodes / 2) + 1, where the division rounds down.
So (3 / 2) + 1 = 1 + 1 = 2.
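In other words, integer division - a quick way to sanity-check it from a shell:

    # quorum = (number of master-eligible nodes / 2) + 1, with the division rounded down
    echo $(( 3 / 2 + 1 ))   # 2 for a 3-node cluster
    echo $(( 5 / 2 + 1 ))   # 3 for a 5-node cluster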
