We are trying to run an Elasticsearch cluster with multiple nodes on the same VM for testing purposes, using the following Docker Compose file:
version: "2.2"
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.13.2
    volumes:
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    environment:
      - node.name=es01
      - cluster.name=docker-cluster
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es02,es03
      - bootstrap.memory_lock=true
    mem_limit: 1073741824
    ulimits:
      memlock:
        soft: -1
        hard: -1
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.13.2
    volumes:
      - esdata02:/usr/share/elasticsearch/data
    environment:
      - node.name=es02
      - cluster.name=docker-cluster
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es01,es03
      - bootstrap.memory_lock=true
    mem_limit: 1073741824
    ulimits:
      memlock:
        soft: -1
        hard: -1
  es03:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.13.2
    volumes:
      - esdata03:/usr/share/elasticsearch/data
    environment:
      - node.name=es03
      - cluster.name=docker-cluster
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es01,es02
      - bootstrap.memory_lock=true
    mem_limit: 1073741824
    ulimits:
      memlock:
        soft: -1
        hard: -1
  kibana:
    image: docker.elastic.co/kibana/kibana:7.13.2
    volumes:
      - kibanadata:/usr/share/kibana/data
    ports:
      - 5601:5601
    environment:
      SERVERNAME: kibana
      ELASTICSEARCH_HOSTS: '["https://es01:9200","https://es02:9200","https://es03:9200"]'
    mem_limit: 1073741824

volumes:
  esdata01:
    driver: local
  esdata02:
    driver: local
  esdata03:
    driver: local
  kibanadata:
    driver: local
When it is started, everything runs well.
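For context, bringing the stack up and checking which node currently holds the master role looks roughly like this (a sketch: it assumes the HTTP layer on port 9200 is reachable from the host through the published es01 port, without TLS or credentials; if security/HTTPS is enabled, the curl calls need the appropriate flags):

# start the stack in the background
docker-compose up -d

# once the cluster has formed, ask which node is the elected master
curl -s 'http://localhost:9200/_cat/master?v'

# or list all nodes; the elected master is marked with "*" in the master column
curl -s 'http://localhost:9200/_cat/nodes?v&h=name,master,node.role'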
We are trying to simulate scenarios in which nodes crash, to check whether, when the master node stops, one of the other nodes takes over the master role.
This scenario works as long as the master node is not the first node, "es01": when "es01" is the master and it stops, the other two nodes do not elect a new master.
So it is not clear to us why, when the master is "es02" or "es03" and it stops, one of the other two nodes takes over the master role, but when the master is "es01" and it stops, the other two nodes do not.
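The steps used to simulate the crash look roughly like this (a sketch: service names are taken from the compose file above; since only es01 publishes port 9200 to the host, the surviving nodes are queried from inside a container, which assumes curl is available in the Elasticsearch image and that the HTTP layer is not TLS-protected):

# stop the current master (here es01) to simulate a crash
docker-compose stop es01

# ask one of the surviving nodes whether a new master has been elected
docker-compose exec es02 curl -s 'http://localhost:9200/_cat/master?v'

# cluster health also reports whether a master is currently present
docker-compose exec es02 curl -s 'http://localhost:9200/_cluster/health?pretty'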