Elasticsearch with Docker

I ran the following Docker Compose file and expected two nodes to come up, but only one does. There must be some obvious error; please share your input.

Taken from the documentation https://www.elastic.co/guide/en/elasticsearch/reference/6.8/docker.html

version: '2.2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.8.12
    container_name: elasticsearch
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - esnet
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.8.12
    container_name: elasticsearch2
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata2:/usr/share/elasticsearch/data
    networks:
      - esnet

volumes:
  esdata1:
    driver: local
  esdata2:
    driver: local

networks:
  esnet:

http://127.0.0.1:9200/_cat/health

1598033352 18:09:12 docker-cluster green 1 1 0 0 0 0 0 0 - 100.0%

docker-compose ps

Name                   Command               State                Ports              
------------------------------------------------------------------------------------------
elasticsearch    /usr/local/bin/docker-entr ...   Up      0.0.0.0:9200->9200/tcp, 9300/tcp
elasticsearch2   /usr/local/bin/docker-entr ...   Up      9200/tcp, 9300/tcp

Hmm, interesting. Did you wait long enough for them to find each other and form a cluster post boot-up? The docker-compose.yml you've pasted and the example one in the docs are exactly the same. When I run it, I get

1598034943 18:35:43 docker-cluster green 2 2 0 0 0 0 0 0 - 100.0%

i.e. two nodes of this exact version of Elasticsearch, exactly as it's supposed to work. I can't reproduce your problem, and I don't see an obvious reason for the difference.

Hmm, no luck.
When you say "wait long enough", how long do you mean? 1 minute?

Yeah, 1 minute should be enough, but it depends on your hardware. Usually it takes 20-30 seconds; if the second node hasn't joined after 3 minutes, you really do have a problem. Here are my logs from the same example you're trying:

elasticsearch     | [2020-08-21T18:34:18,866][INFO ][o.e.c.s.MasterService    ] [DsbxDD4] zen-disco-node-join[{pfFjLH4}{pfFjLH4BTMulc2Mi6ilmPA}{mvCbPYD3SY2y_43qNIGYYw}{172.18.0.3}{172.18.0.3:9300}{ml.machine_memory=4129218560, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true}], reason: added {{pfFjLH4}{pfFjLH4BTMulc2Mi6ilmPA}{mvCbPYD3SY2y_43qNIGYYw}{172.18.0.3}{172.18.0.3:9300}{ml.machine_memory=4129218560, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true},}
elasticsearch2    | [2020-08-21T18:34:18,912][INFO ][o.e.c.s.ClusterApplierService] [pfFjLH4] detected_master {DsbxDD4}{DsbxDD4lQ9-V7YVZ7RQ9kg}{h-TMjWdyRjGqRn_M9VMUfw}{172.18.0.2}{172.18.0.2:9300}{ml.machine_memory=4129218560, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true}, added {{DsbxDD4}{DsbxDD4lQ9-V7YVZ7RQ9kg}{h-TMjWdyRjGqRn_M9VMUfw}{172.18.0.2}{172.18.0.2:9300}{ml.machine_memory=4129218560, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true},}, reason: apply cluster state (from master [master {DsbxDD4}{DsbxDD4lQ9-V7YVZ7RQ9kg}{h-TMjWdyRjGqRn_M9VMUfw}{172.18.0.2}{172.18.0.2:9300}{ml.machine_memory=4129218560, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true} committed version [12]])
elasticsearch2    | [2020-08-21T18:34:18,971][WARN ][o.e.x.s.a.s.m.NativeRoleMappingStore] [pfFjLH4] Failed to clear cache for realms [[]]
elasticsearch2    | [2020-08-21T18:34:18,975][INFO ][o.e.x.s.a.TokenService   ] [pfFjLH4] refresh keys
elasticsearch2    | [2020-08-21T18:34:19,109][INFO ][o.e.x.s.a.TokenService   ] [pfFjLH4] refreshed keys
elasticsearch     | [2020-08-21T18:34:19,130][INFO ][o.e.c.s.ClusterApplierService] [DsbxDD4] added {{pfFjLH4}{pfFjLH4BTMulc2Mi6ilmPA}{mvCbPYD3SY2y_43qNIGYYw}{172.18.0.3}{172.18.0.3:9300}{ml.machine_memory=4129218560, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true},}, reason: apply cluster state (from master [master {DsbxDD4}{DsbxDD4lQ9-V7YVZ7RQ9kg}{h-TMjWdyRjGqRn_M9VMUfw}{172.18.0.2}{172.18.0.2:9300}{ml.machine_memory=4129218560, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true} committed version [12] source [zen-disco-node-join[{pfFjLH4}{pfFjLH4BTMulc2Mi6ilmPA}{mvCbPYD3SY2y_43qNIGYYw}{172.18.0.3}{172.18.0.3:9300}{ml.machine_memory=4129218560, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true}]]])
elasticsearch     | [2020-08-21T18:34:19,141][WARN ][o.e.d.z.ElectMasterService] [DsbxDD4] value for setting "discovery.zen.minimum_master_nodes" is too low. This can result in data loss! Please set it to at least a quorum of master-eligible nodes (current value: [-1], total number of master-eligible nodes used for publishing in this round: [2])
elasticsearch2    | [2020-08-21T18:34:19,160][INFO ][o.e.h.n.Netty4HttpServerTransport] [pfFjLH4] publish_address {172.18.0.3:9200}, bound_addresses {0.0.0.0:9200}
elasticsearch2    | [2020-08-21T18:34:19,161][INFO ][o.e.n.Node               ] [pfFjLH4] started
elasticsearch2    | [2020-08-21T18:34:19,267][INFO ][o.e.l.LicenseService     ] [pfFjLH4] license [29417248-2d03-4cbd-a49e-4201736499da] mode [basic] - valid
elasticsearch2    | [2020-08-21T18:34:19,274][INFO ][o.e.x.m.e.l.LocalExporter] [pfFjLH4] waiting for elected master node [{DsbxDD4}{DsbxDD4lQ9-V7YVZ7RQ9kg}{h-TMjWdyRjGqRn_M9VMUfw}{172.18.0.2}{172.18.0.2:9300}{ml.machine_memory=4129218560, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true}] to setup local exporter [default_local] (does it have x-pack installed?)
elasticsearch     | [2020-08-21T18:34:19,321][INFO ][o.e.l.LicenseService     ] [DsbxDD4] license [29417248-2d03-4cbd-a49e-4201736499da] mode [basic] - valid
elasticsearch2    | [2020-08-21T18:34:19,376][INFO ][o.e.x.m.e.l.LocalExporter] [pfFjLH4] waiting for elected master node [{DsbxDD4}{DsbxDD4lQ9-V7YVZ7RQ9kg}{h-TMjWdyRjGqRn_M9VMUfw}{172.18.0.2}{172.18.0.2:9300}{ml.machine_memory=4129218560, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true}] to setup local exporter [default_local] (does it have x-pack installed?)

See especially zen-disco-node-join and waiting for elected master messages.
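
One side note while we're looking at logs: the "discovery.zen.minimum_master_nodes" warning in my output above also applies to the stock docs example. With two master-eligible nodes the quorum is 2, so once your cluster forms you may want to add the line below under environment: of both the elasticsearch and elasticsearch2 services (just a sketch on top of the docs compose file; it is not required for the nodes to join):

      # added under environment: of both elasticsearch and elasticsearch2
      - discovery.zen.minimum_master_nodes=2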

Try doing a fresh docker-compose up and paste your entire log here; let's see what's going wrong. If it won't fit in a forum post, use https://gist.github.com/ .
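
It can also help to hit http://127.0.0.1:9200/_cat/nodes?v (same idea as the _cat/health call you already used); it lists exactly which nodes have joined the cluster behind port 9200. And if the containers are already running, docker-compose logs elasticsearch elasticsearch2 will dump both nodes' logs without restarting anything.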

Here you go

I also happened to add one more Elasticsearch service and a Kibana service (rough sketch below); I hope that is not a problem.
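
For reference, the Kibana service looks roughly like this (a sketch from memory; the image tag and URL value are assumptions, exact settings may differ from what I actually used):

  kibana:
    image: docker.elastic.co/kibana/kibana:6.8.12     # assuming the same 6.8.12 tag as the nodes
    container_name: kibana
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200   # 6.x Kibana reads ELASTICSEARCH_URL
    ports:
      - 5601:5601
    networks:
      - esnet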

What operating system and Docker/Compose versions are you running this under? I wonder if something is interfering with the esnet Docker network so that the nodes can't see each other.
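
In the meantime, one thing worth trying: make the zen discovery setting symmetric so each node explicitly knows about the other. This is not from the official example, just a sketch of what I would try; the service names come from your compose file:

  elasticsearch:
    environment:
      # keep the existing entries and add this one
      - "discovery.zen.ping.unicast.hosts=elasticsearch,elasticsearch2"
  elasticsearch2:
    environment:
      # replaces the single-host entry from the docs example
      - "discovery.zen.ping.unicast.hosts=elasticsearch,elasticsearch2"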

Ubuntu.
I too feel the same: it seems the two nodes are NOT aware of each other and are each creating their own cluster?

Hmm, this seems to confirm it: both nodes are competing with each other to become master?

elasticsearch2    | [2020-08-21T19:28:22,088][INFO ][o.e.c.s.MasterService    ] [MJxcYj2] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {MJxcYj2}{MJxcYj2aQB2sVhBGDGQ7Pg}{Ejr7_xXSSa2LC-DHa5vHdg}{172.25.0.4}{172.25.0.4:9300}{ml.machine_memory=6115733504, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}
elasticsearch2    | [2020-08-21T19:28:22,157][INFO ][o.e.c.s.ClusterApplierService] [MJxcYj2] new_master {MJxcYj2}{MJxcYj2aQB2sVhBGDGQ7Pg}{Ejr7_xXSSa2LC-DHa5vHdg}{172.25.0.4}{172.25.0.4:9300}{ml.machine_memory=6115733504, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}, reason: apply cluster state (from master [master {MJxcYj2}{MJxcYj2aQB2sVhBGDGQ7Pg}{Ejr7_xXSSa2LC-DHa5vHdg}{172.25.0.4}{172.25.0.4:9300}{ml.machine_memory=6115733504, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
elasticsearch     | [2020-08-21T19:28:22,296][INFO ][o.e.c.s.MasterService    ] [xwrHIvk] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {xwrHIvk}{xwrHIvkxRQmSnETYdQsWUQ}{Ull3Yz5HTOWwGeCOnOBApA}{172.25.0.3}{172.25.0.3:9300}{ml.machine_memory=6115733504, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}
elasticsearch     | [2020-08-21T19:28:22,367][INFO ][o.e.c.s.ClusterApplierService] [xwrHIvk] new_master {xwrHIvk}{xwrHIvkxRQmSnETYdQsWUQ}{Ull3Yz5HTOWwGeCOnOBApA}{172.25.0.3}{172.25.0.3:9300}{ml.machine_memory=6115733504, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}, reason: apply cluster state (from master [master {xwrHIvk}{xwrHIvkxRQmSnETYdQsWUQ}{Ull3Yz5HTOWwGeCOnOBApA}{172.25.0.3}{172.25.0.3:9300}{ml.machine_memory=6115733504, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])

OK, I found the issue. The two containers were somehow not aware of each other and were each trying to create a cluster on their own.
I introduced a startup delay with depends_on (sketch below), and it works fine now. Thanks.
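
For anyone hitting the same thing, the change is roughly this on the second node (a sketch; note that in Compose file version 2.2, depends_on only controls container start order, it does not wait for the first node to be ready):

  elasticsearch2:
    # start only after the first node's container has been started
    depends_on:
      - elasticsearch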
