Hi,
I'm trying to deploy Elasticsearch on a 3-node Swarm cluster, with 3 Elasticsearch replicas (one per node).
I'm using the following Docker Compose file for it (elastic_compose.yml):
version: "3.7"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.2
    hostname: "{{.Node.Hostname}}"
    environment:
      - node.name={{.Node.Hostname}}
      - cluster.name=elk
      - "ES_JAVA_OPTS=-Xms2g -Xmx2g"
      - network.host=0.0.0.0
      - discovery.seed_hosts=elasticsearch
      - cluster.initial_master_nodes='host1,host2'
      - node.ml=false
      - xpack.ml.enabled=false
      - xpack.monitoring.enabled=false
      - xpack.security.enabled=false
      - xpack.watcher.enabled=false
      - bootstrap.memory_lock=false
    volumes:
      - edata:/usr/share/elasticsearch/data
    deploy:
      mode: global
      endpoint_mode: dnsrr
volumes:
  edata:
    driver: local
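For reference, the stack is deployed with the usual stack deploy command, something like the following (the stack name nee is an assumption based on the nee_elasticsearch service name shown in the docker service ls output further down):

```shell
# Run on a Swarm manager node; "nee" is the stack name,
# so the service becomes nee_elasticsearch.
docker stack deploy -c elastic_compose.yml nee
```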
I can see all replicas running, but in the container logs I see the following WARN:
{"type": "server", "timestamp": "2020-05-13T08:28:35,380Z", "level": "WARN", "component": "o.e.c.c.ClusterFormationFailureHelper", "cluster.name": "elk", "node.name": "hostname1", "message": "master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes ['hostname1, hostname2, hostname3'] to bootstrap a cluster: have discovered [{hostname1}{OG54ixTKTu2UawYRbdF0rQ}{i58wyiHkSbiawLQsRY32Tw}{10.0.24.3}{10.0.24.3:9300}{dim}{xpack.installed=true}]; discovery will continue using from hosts providers and [{hostname1}{OG54ixTKTu2UawYRbdF0rQ}{i58wyiHkSbiawLQsRY32Tw}{10.0.24.3}{10.0.24.3:9300}{dim}{xpack.installed=true}] from last-known cluster state; node term 0, last-accepted version 0 in term 0" }
When I check the cluster status over port 9200, the connection is refused:
curl: (7) Failed connect to xxx.xxx.x.x:9200; Connection refused
However, docker service ls shows all 3 Elasticsearch tasks as running:
ID NAME MODE REPLICAS IMAGE PORTS
ptve6uuehj19 nee_elasticsearch global 3/3 docker.elastic.co/elasticsearch/elasticsearch:7.6.2
Could anyone please help me understand this, or suggest possible solutions?
Thanks in advance.