Run an Elasticsearch Cluster with Docker

I am having problems creating a cluster with the Elasticsearch Docker image.

I am deploying two different servers on my local network. There are no firewalls, and the servers can communicate with each other.

The command I am using is:

docker run --name elasticsearch-container \
    -e cluster.name=docker-cluster \
    -e network.publish_host=192.168.1.161 \
    -e bootstrap.memory_lock=true \
    -e node.name=main01 --publish-all \
    -p 9200:9200 \
    -d docker.elastic.co/elasticsearch/elasticsearch:5.5.0

And on the other server:

docker run --name elasticsearch-container \
    -e cluster.name=docker-cluster \
    -e network.publish_host=192.168.1.162 \
    -e bootstrap.memory_lock=true \
    -e discovery.zen.ping.unicast.hosts=192.168.1.161 \
    -e node.name=main02 --publish-all \
    -p 9200:9200 \
    -d docker.elastic.co/elasticsearch/elasticsearch:5.5.0

But the cluster does not work.

The message I am receiving is:

[o.e.c.s.ClusterService ] [main02] new_master {main02}{Pq1uX-fAS2mPnw5IQ5fSyQ}{GIBd58CBSlaAMIkHaNrKtQ}{192.168.1.181}{192.168.1.181:9300}{ml.enabled=true}, reason: zen-disco-elected-as-master ([0] nodes joined)

I mapped a local config file into the Docker container, and the config is:

xpack.security.enabled: false
cluster.name: "docker-cluster"
network.host: 0.0.0.0
discovery.zen.minimum_master_nodes: 1
bootstrap.memory_lock: true

What am I doing wrong?

Hello! Thanks for your interest in the Elasticsearch Docker images.

I think the main issue you are having here is the use of --publish-all. You should replace it with -p 9300:9300 for the transport port: --publish-all publishes each exposed container port on a randomly allocated external port, so the second node cannot reach the first node's transport port 9300 at the address it expects.
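For example, your first command would become something like this (only the --publish-all flag is replaced; everything else is unchanged):

docker run --name elasticsearch-container \
    -e cluster.name=docker-cluster \
    -e network.publish_host=192.168.1.161 \
    -e bootstrap.memory_lock=true \
    -e node.name=main01 \
    -p 9300:9300 \
    -p 9200:9200 \
    -d docker.elastic.co/elasticsearch/elasticsearch:5.5.0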

You can inspect the container ports with docker port elasticsearch-container.

I have prepared a Vagrantfile (using VirtualBox) to replicate your scenario. It creates an elasticsearch.yml with your custom config params under /home/vagrant.
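A minimal sketch of such a Vagrantfile is below; the IPs match the commands further down, but the box name and the Docker install steps are illustrative assumptions, so adjust them to your environment:

# Two VMs on a private network, with Docker installed and the
# custom elasticsearch.yml written to /home/vagrant on each.
# Assumptions: ubuntu/xenial64 box, get.docker.com install script.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"
  { "m01" => "192.168.124.101", "m02" => "192.168.124.102" }.each do |name, ip|
    config.vm.define name do |node|
      node.vm.hostname = name
      node.vm.network "private_network", ip: ip
      node.vm.provision "shell", inline: <<-SHELL
        curl -fsSL https://get.docker.com | sh
        usermod -aG docker vagrant
        # Elasticsearch 5.x bootstrap checks require this on the host
        sysctl -w vm.max_map_count=262144
        cat > /home/vagrant/elasticsearch.yml <<'EOF'
xpack.security.enabled: false
cluster.name: "docker-cluster"
network.host: 0.0.0.0
discovery.zen.minimum_master_nodes: 1
bootstrap.memory_lock: true
EOF
        chown vagrant:vagrant /home/vagrant/elasticsearch.yml
      SHELL
    end
  end
end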

After vagrant up && vagrant ssh m01, here is what happens if I docker run using --publish-all:

vagrant@m01:~$ docker run --ulimit memlock=-1:-1 --name elasticsearch-container -v $PWD/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml -e cluster.name=docker-cluster -e bootstrap.memory_lock=true -e node.name=main01 -e network.publish_host=192.168.124.101 --publish-all -p 9200:9200 -d docker.elastic.co/elasticsearch/elasticsearch:5.5.0
1051cc374b32f14d977c7eb8757b666f857b91b7ab68c37736a7b2f93e66aa7e

vagrant@m01:~$ docker port elasticsearch-container 
9200/tcp -> 0.0.0.0:9200
9300/tcp -> 0.0.0.0:32768

vagrant@m01:~$ nc -v localhost 9200
Connection to localhost 9200 port [tcp/*] succeeded!
^C
vagrant@m01:~$ nc -v localhost 9300
nc: connect to localhost port 9300 (tcp) failed: Connection refused
nc: connect to localhost port 9300 (tcp) failed: Connection refused

Whereas I can successfully do the following:

vagrant destroy -f

vagrant up

vagrant ssh m01 -c 'docker run --ulimit memlock=-1:-1 --name elasticsearch-container -v $PWD/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml -e cluster.name=docker-cluster -e bootstrap.memory_lock=true -e node.name=main01 -e network.publish_host=192.168.124.101 -p 9300:9300 -p 9200:9200 -d docker.elastic.co/elasticsearch/elasticsearch:5.5.0'

vagrant ssh m02 -c 'docker run --ulimit memlock=-1:-1 --name elasticsearch-container -v $PWD/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml -e network.publish_host=192.168.124.102 -e cluster.name=docker-cluster -e bootstrap.memory_lock=true -e node.name=main02 -e discovery.zen.ping.unicast.hosts=192.168.124.101 -p 9200:9200 -p 9300:9300 -d docker.elastic.co/elasticsearch/elasticsearch:5.5.0'

$ vagrant ssh m02 -c 'curl -s localhost:9200/_cluster/health | jq .'
{
  "cluster_name": "docker-cluster",
  "status": "green",
  "timed_out": false,
  "number_of_nodes": 2,
  "number_of_data_nodes": 2,
  "active_primary_shards": 2,
  "active_shards": 4,
  "relocating_shards": 0,
  "initializing_shards": 0,
  "unassigned_shards": 0,
  "delayed_unassigned_shards": 0,
  "number_of_pending_tasks": 0,
  "number_of_in_flight_fetch": 0,
  "task_max_waiting_in_queue_millis": 0,
  "active_shards_percent_as_number": 100
}
Connection to 127.0.0.1 closed.
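You can also confirm that both nodes joined the cluster with the _cat API, for example:

vagrant ssh m02 -c 'curl -s "localhost:9200/_cat/nodes?v"'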

Hey Dimitrios,

You saved my life :slight_smile:

It worked!!!!! I just tested it right now and it is OK.

Thank you!


Totally my pleasure Joao!
