Elasticsearch always has a node disconnected when initializing shards

Hi,

I have 4 Elasticsearch nodes in my cluster,
but there is always one node that gets disconnected when the cluster is initializing shards or when I add some configuration followed by a rolling restart.

I have 64 GB of RAM on each node.
Is there any solution for this?

Best Regards,

This is not sufficient information to provide a useful answer.

Have you checked the log file of the affected node? Any special configuration should also be included in your post, as well as the Elasticsearch version and, ideally, steps to reproduce the issue.

Please read how to write an informative post, so that others can chime in and help.

--Alex

Hi,

Thanks for your reply.

I'm using Elasticsearch 2.3.3, and here are some logs that I've found:

[2017-09-11 13:08:14,289][INFO ][discovery.zen            ] [Orka] master_left [{Iron Cross}{6k1R1wJVQZ67D2F0wdd-OA}{10.1.80.226}{10.1.80.226:9300}], reason [transport disconnected]
[2017-09-11 13:08:14,301][WARN ][discovery.zen            ] [Orka] master left (reason = transport disconnected), current nodes: {{Unuscione}{kGsXwmT_R6CJ_tyLfBxM7Q}{10.1.80.220}{10.1.80.220:9300},{Orka}{zWlA3gmbTI6Ug1hgBhecnA}{10.1.80.221}{10.1.80.221:9300},{Doctor Leery}{Y_0ZCdv2TqiDaAvRW9jtgg}{10.1.80.223}{10.1.80.223:9300},}
[2017-09-11 13:08:14,301][INFO ][cluster.service          ] [Orka] removed {{Iron Cross}{6k1R1wJVQZ67D2F0wdd-OA}{10.1.80.226}{10.1.80.226:9300},}, reason: zen-disco-master_failed ({Iron Cross}{6k1R1wJVQZ67D2F0wdd-OA}{10.1.80.226}{10.1.80.226:9300})

I have checked my network and there is no problem with it. The data node is still up when I check it from Ambari.
This is my configuration:

http.port: 9200
transport.tcp.port: 9300
discovery.zen.minimum_master_nodes: 1
discovery.zen.ping.unicast.hosts: ["node1","node2","node3","node4"]
discovery.zen.ping_timeout: 30s

What do you mean by 'when I check it from Ambari'? Do you see a different 'world' from different hosts? Can you check the log files on all nodes to see whether they detect that the master node is gone?

Also, minimum master nodes should be 3 in this setup, not 1, if all nodes are master node eligible.
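
For example (an illustrative sketch, assuming all four nodes stay master eligible), that would mean the following line in each node's elasticsearch.yml:

discovery.zen.minimum_master_nodes: 3    # quorum of 4 master-eligible nodes: (4 / 2) + 1 = 3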

Sorry, I mean that when I check the Elasticsearch service status on all nodes, the service is still running. I said 'from Ambari' because I installed Elasticsearch through Ambari.

Could you please explain this part to me?

Also, minimum master nodes should be 3 in this setup, not 1, if all nodes are master node eligible.

On my cluster I have now set 2 master nodes and 4 data nodes,

i.e.

node 1 : master + data node
node 2 : master + data node
node 3 : data node
node 4 : data node

Is there any mistake in my architecture or my configuration?
Please advise.

You should always aim to have 3 master eligible nodes, with minimum_master_nodes set to 2 in order to avoid split-brain scenarios.
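
As a rough sketch of how that could look on your four nodes (illustrative settings in each node's elasticsearch.yml):

node 1 : node.master: true,  node.data: true
node 2 : node.master: true,  node.data: true
node 3 : node.master: true,  node.data: true
node 4 : node.master: false, node.data: true

discovery.zen.minimum_master_nodes: 2    # quorum of 3 master-eligible nodes: (3 / 2) + 1 = 2

With three master-eligible nodes and a quorum of two, losing any single node still leaves enough masters to elect, and the cluster cannot split into two independent masters.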

OK, I'll try it.
Many thanks for your suggestion.

Best Regards,

If you just check the service status, there is no guarantee that those nodes can connect to each other, and that is the crucial part.
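
One quick way to verify that (a sketch, with host and port assumed from your posted configuration) is to run the following on each node and compare the output:

curl -XGET 'http://localhost:9200/_cat/nodes?v'              # every host should list all four nodes
curl -XGET 'http://localhost:9200/_cluster/health?pretty'    # number_of_nodes should match on every host

If one host reports fewer nodes than the others, that host has a connectivity problem on the transport port (9300), even though its service is still running.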
