Docker cluster: "not enough master nodes discovered during pinging"

docker

(Matheus Costa) #1

I'm trying to bring up one Elasticsearch cluster (with 3 nodes) in Docker across 3 different hosts. I've started 2 nodes so far, and I get this error when I access IP:9200/_cluster/health?pretty :
{
  "error" : {
    "root_cause" : [
      {
        "type" : "master_not_discovered_exception",
        "reason" : null
      }
    ],
    "type" : "master_not_discovered_exception",
    "reason" : null
  },
  "status" : 503
}

and this error appears in the logs:

[2018-11-08T20:08:33,355][WARN ][o.e.d.z.ZenDiscovery ] [es-node1] not enough master nodes discovered during pinging (found [[Candidate{node={es-node1}{PDAnZSBdQl2zoRdtrcRf5Q}{cWDrFb-0RTWeam1Ld_3yMw}{localhost}{127.0.0.1:9300}{ml.machine_memory=4294967296, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}, clusterStateVersion=-1}]], but needed [2]), pinging again

My elasticsearch.yml for node1 (10.10.13.6):

cluster.name: "docker-cluster"
node.name: "es-node1"
node.master: true
node.data: true
network.host: 10.10.13.6 
http.port: 9200
discovery.zen.ping.unicast.hosts: ["10.10.13.6", "10.10.13.7", "10.10.13.8"]
network.publish_host: 10.10.13.6
discovery.zen.minimum_master_nodes: 2 

My elasticsearch.yml for node2 (10.10.13.7):

cluster.name: "docker-cluster"
node.name: "es-node2"
node.master: true
node.data: true
network.host: 10.10.13.7 
http.port: 9200
discovery.zen.ping.unicast.hosts: ["10.10.13.6", "10.10.13.7", "10.10.13.8"]
network.publish_host: 10.10.13.6
discovery.zen.minimum_master_nodes: 2 

I need help, thanks.


(Christian Dahlqvist) #2

Should network.publish_host not be 10.10.13.7 for node 2?
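In other words, something like this in the node 2 elasticsearch.yml (assuming 10.10.13.7 is the host IP of node 2):

network.publish_host: 10.10.13.7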


(Matheus Costa) #3

I changed it, but the error continues.


(Christian Dahlqvist) #4

Why is there a reference to localhost in your log file?


(Matheus Costa) #5

I think it's because of my docker-compose.yml:

version: '2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.4.2
    container_name: elasticsearch
    restart: always
    volumes:
      - es_data:/usr/share/elasticsearch/data
      - es_bin:/usr/share/elasticsearch/bin
      - es_config:/usr/share/elasticsearch/config
    ports:
      - 9200:9200
      - 9300:9300
    environment:
      - http.host=0.0.0.0
      - transport.host=localhost
      # - network.host=10.10.13.6
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms2048m -Xmx3072m"
    mem_limit: 4096m

  kibana:
    image: docker.elastic.co/kibana/kibana:6.4.2
    restart: always
    container_name: kibana
    links:
      - elasticsearch
    environment:
      - SERVER_NAME=Kibana
      - ELASTICSEARCH_URL=http://elasticsearch:9200
    ports:
      - "5601:5601"
    volumes:
      - kibana_data:/usr/share/kibana
    mem_limit: 2048m

volumes:
  mongo_data:
    driver: local
  es_data:
    driver: local
  es_bin:
    driver: local
  es_config:
    driver: local
  graylog_data:
    driver: local
  kibana_data:
    driver: local

I tried changing localhost and 0.0.0.0 to the host IP, but when I did, the compose didn't come up; the container started and then stopped after a few seconds.


(Christian Dahlqvist) #6

Well, transport.host cannot be localhost, as the transport layer is what Elasticsearch uses for node-to-node communication. Change this to 0.0.0.0 as well and see if that makes a difference.
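In the compose file, that part of the environment section would then look something like this (a sketch, keeping the rest of your settings as they are):

    environment:
      - http.host=0.0.0.0
      - transport.host=0.0.0.0
      # - network.host=10.10.13.6
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms2048m -Xmx3072m"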


(Matheus Costa) #7

Now it returns this error in the logs:

[2018-11-09T12:14:23,974][INFO ][o.e.t.TransportService   ] [es-node1] publish_address {172.18.0.3:9300}, bound_addresses {0.0.0.0:9300}
[2018-11-09T12:14:24,017][INFO ][o.e.b.BootstrapChecks    ] [es-node1] bound or publishing to a non-loopback address, enforcing bootstrap checks
ERROR: [1] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[2018-11-09T12:14:24,079][INFO ][o.e.n.Node               ] [es-node1] stopping ...
[2018-11-09T12:14:24,121][INFO ][o.e.n.Node               ] [es-node1] stopped
[2018-11-09T12:14:24,126][INFO ][o.e.n.Node               ] [es-node1] closing ...
[2018-11-09T12:14:24,197][INFO ][o.e.n.Node               ] [es-node1] closed
[2018-11-09T12:14:24,205][INFO ][o.e.x.m.j.p.NativeController] Native controller process has stopped - no new native processes can be started

(Matheus Costa) #8

Christian, are you familiar with this error?


(Christian Dahlqvist) #9

Yes, it is one of the bootstrap checks and is covered here.
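In short, the vm.max_map_count kernel setting has to be raised on the Docker host itself (it cannot be changed from inside the container). A minimal sketch of the usual fix on a Linux host, run as root:

# apply immediately
sysctl -w vm.max_map_count=262144

# persist across reboots
echo "vm.max_map_count=262144" >> /etc/sysctl.conf

After that, restart the Elasticsearch container and the bootstrap check should pass.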


(Matheus Costa) #10

Thanks Christian, the cluster is up.


(system) #11

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.