Configuring multiple nodes

We are trying to run an Elasticsearch cluster with multiple nodes on the same VM for testing purposes, and for this we are using the following Docker Compose file:

```yaml
version: "2.2"

services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.13.2
    volumes:
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    environment:
      - node.name=es01
      - cluster.name=docker-cluster
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es02,es03
      - bootstrap.memory_lock=true
    mem_limit: 1073741824
    ulimits:
      memlock:
        soft: -1
        hard: -1

  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.13.2
    volumes:
      - esdata02:/usr/share/elasticsearch/data
    environment:
      - node.name=es02
      - cluster.name=docker-cluster
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es01,es03
      - bootstrap.memory_lock=true
    mem_limit: 1073741824
    ulimits:
      memlock:
        soft: -1
        hard: -1

  es03:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.13.2
    volumes:
      - esdata03:/usr/share/elasticsearch/data
    environment:
      - node.name=es03
      - cluster.name=docker-cluster
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es01,es02
      - bootstrap.memory_lock=true
    mem_limit: 1073741824
    ulimits:
      memlock:
        soft: -1
        hard: -1

  kibana:
    image: docker.elastic.co/kibana/kibana:7.13.2
    volumes:
      - kibanadata:/usr/share/kibana/data
    ports:
      - 5601:5601
    environment:
      SERVERNAME: kibana
      ELASTICSEARCH_HOSTS: '["https://es01:9200","https://es02:9200","https://es03:9200"]'
    mem_limit: 1073741824

volumes:
  esdata01:
    driver: local
  esdata02:
    driver: local
  esdata03:
    driver: local
  kibanadata:
    driver: local
```
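
Once the stack is up, you can check which node currently holds the master role. A minimal sketch, assuming `docker-compose` v1 and that security/TLS is not enabled on the nodes (the compose file above does not configure it), so plain HTTP works on the published port:

```
# Start the cluster in the background
docker-compose up -d

# Ask the cluster which node is the elected master
curl -s "http://localhost:9200/_cat/master?v"

# List all nodes; the elected master is marked with "*" in the master column
curl -s "http://localhost:9200/_cat/nodes?v"
```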

When it is started, everything runs well.
We are trying to simulate scenarios in which the nodes crash, to see whether, if the master node stops, one of the other nodes takes over the master role.
This scenario works as long as the master node is not the first node, "es01": when es01 is the master and stops, the other two nodes do not elect a new master.

So, it is not clear to us why, when the master node is "es02" or "es03" and it stops, the master role is taken over by one of the other two nodes, but when the master node is "es01" and it stops, the other two nodes do not take over the master role.
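
One way to narrow this down is to query the two surviving nodes directly after stopping es01. Note that only es01 publishes port 9200 in the compose file above, so this sketch assumes you temporarily add port mappings for es02 and es03 (9201 and 9202 are arbitrary choices):

```
# Stop es01 while it holds the master role
docker-compose stop es01

# Ask a surviving node who the master is; a 503 with
# "master_not_discovered_exception" would confirm that no new master was elected
curl -s "http://localhost:9201/_cat/master?v"

# Cluster health from the other survivor, for comparison
curl -s "http://localhost:9202/_cluster/health?pretty"
```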

We'd need to see the logs from the nodes to comment further.

I'm new here and I don't see a button for attaching a file; I only see the option to add an image file.

Please format your code, logs, or configuration files using the </> icon, as explained in this guide, and not the citation button. It will make your post more readable.

Or use markdown style like:

```
CODE
```

If you are not using markdown format, use the </> icon in the editor toolbar.

There's a live preview panel for exactly this reason.

I cannot attach the log file, which is around ~100K; it complains:

Body is limited to 35000 characters; you entered 163382.

If some outputs are too big, please share them on gist.github.com and link them here.
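
Alternatively, you can filter the log down to the election-related lines before posting; a sketch (the file name and grep patterns are just examples):

```
# Keep only master-election and node join/leave messages
grep -Ei "master|elect|node-join|node-left" docker-compose_logs.txt > election.txt

# Check that the result fits under the 35000-character limit
wc -c election.txt
```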

Thank you.

So, the flow of events (a command sketch follows the list):

  • Elasticsearch cluster started
  • es02 became master node
  • es02 master node stopped
  • es03 became the new master node
  • es02 node restarted
  • es03 master node stopped
  • es01 became the new master node
  • es03 node restarted (this information was found using the Kibana API at http://localhost:5601)
  • es01 master node stopped
  • the Kibana API stopped working; http://localhost:5601 was unreachable
  • es01 node restarted
  • es03 became the new master node
  • the Kibana API started working again; http://localhost:5601 was reachable
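
For reference, the stop/restart steps above map onto plain `docker-compose` commands; a sketch of the failing part of the sequence, using Kibana's status endpoint to check reachability:

```
# es03 is master at this point; stop it, and es01 takes over
docker-compose stop es03
curl -s "http://localhost:9200/_cat/master?v"
docker-compose start es03

# Stop es01 while it is master; Kibana becomes unreachable at this step
docker-compose stop es01
curl -s "http://localhost:5601/api/status"
docker-compose start es01
```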

So it is strange that the Kibana API stops working when the es01 master node stops.

The log file can be found at https://github.com/albertszab/elastic/blob/main/docker-composer_logs.txt

Did you manage to find anything, any kind of problem, maybe related to the configuration?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.