Elasticsearch, Docker and failover

Hi there,

I'm struggling with my docker-compose configuration for testing failover. I create three nodes (es01, es02 and es03), and I want to test a failover situation by killing the master (es01).

Here's my docker-compose file:

version: '3.7'
services:

  # FIRST ELASTICSEARCH NODE: es01 --> FIRST MASTER
  es01:
    image: localhost:5000/elasticsearch740
    container_name: es01
    environment:
      - node.name=es01
      - cluster.initial_master_nodes=es01
      - discovery.seed_hosts=es02,es03
      - cluster.name=cluster-elk
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - network.bind_host=0.0.0.0
    ports:
      - 9200:9200

  # SECOND ELASTICSEARCH NODE: es02 --> MASTER-ELIGIBLE NODE
  es02:
    image: localhost:5000/elasticsearch740
    container_name: es02
    environment:
      - node.name=es02
      - cluster.initial_master_nodes=es01
      - discovery.seed_hosts=es01,es03
      - cluster.name=cluster-elk
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - network.bind_host=0.0.0.0
    ports:
      - 9201:9200

  # THIRD ELASTICSEARCH NODE: es03 --> MASTER-ELIGIBLE NODE
  es03:
    image: localhost:5000/elasticsearch740
    container_name: es03
    environment:
      - node.name=es03
      - cluster.initial_master_nodes=es01
      - discovery.seed_hosts=es01,es02
      - cluster.name=cluster-elk
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - network.bind_host=0.0.0.0
    ports:
      - 9202:9200

Here's the thing: if I list my containers and stop the master (es01 in my example), the container goes down, but I can't reach http://localhost:9200 anymore. I have to switch to another port (9201 or 9202). So I think I made a mistake in my docker-compose file...

Could you please tell me how to do this?

Best regards!

Hi @vincent2mots

Since you bind host port 9200 to port 9200 on es01, you can't reach es01 once that container is down. Everything is working correctly and as intended: host port 9201 is bound to es02's port 9200, and host port 9202 to es03's port 9200, and both of those containers are still running and accessible.
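
For example, this is roughly what the failover check looks like from the host; the _cat/nodes API marks the elected master with a * in its master column (the ports are the ones from your compose file):

docker stop es01                            # kill the current master
curl http://localhost:9200                  # fails: nothing listens on host port 9200 anymore
curl 'http://localhost:9201/_cat/nodes?v'   # es02 still answers; the master column shows the new master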

What do you want to achieve; what is your goal?

Cheers.

My goal is very simple: I want to keep using the localhost:9200 URL and make sure that if one of the nodes goes down, the URL stays alive and a new master is elected :smile:

Hi @vincent2mots

This is not possible with plain port bindings. You need a reverse proxy like Apache or nginx that acts as a load balancer and checks which containers are accessible.

Hope this helps.

Me again. What I would do is install nginx, let it accept outside connections on port 9200, and forward the traffic to ports 9201, 9202 and 9203, which are only accessible locally (that means rebinding es01 from host port 9200 to 9201, so nginx can take host port 9200).
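
A minimal sketch of such an nginx config, assuming the three nodes are re-published on the local-only host ports 9201-9203 as described (the file path is hypothetical):

# Hypothetical /etc/nginx/conf.d/elasticsearch.conf
upstream es_cluster {
    # The three nodes; mark one as failed after repeated errors.
    server 127.0.0.1:9201 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:9202 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:9203 max_fails=3 fail_timeout=30s;
}

server {
    listen 9200;
    location / {
        proxy_pass http://es_cluster;
        # If one node is down, transparently retry on the next one.
        proxy_next_upstream error timeout http_502 http_503;
    }
}

With that in place, localhost:9200 keeps answering as long as at least one node is up.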

I hope this gives you an idea of how to do it.

What are you using to connect to Elasticsearch? The official clients / drivers, Beats, Logstash, Kibana, ... all accept an array of hosts and will round-robin between all available nodes. Having a load balancer shouldn't really be necessary (and it might just create another single point of failure).
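
For example, with the official Python client, a sketch assuming elasticsearch-py 7.x (to match the 7.4.0 image above); the ports are the host bindings from the compose file:

# Minimal sketch with the official Python client (elasticsearch-py 7.x).
from elasticsearch import Elasticsearch

es = Elasticsearch(
    ["http://localhost:9200", "http://localhost:9201", "http://localhost:9202"],
    retry_on_timeout=True,  # retry the request on another node on timeout
    max_retries=3,
)

# Works as long as at least one of the three nodes is reachable.
print(es.cluster.health())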

Thanks for your reply! I'm going to try this today and will let you know :wink:

I'm going to connect Kibana on top of Elasticsearch.

To ingest data, I have two ways:

  • With Logstash (to ingest files, for instance; see the output sketch after this list)
  • With Python scripts (when I have to access an API)
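
For the Logstash path, a sketch of the output block (the index name is hypothetical); the elasticsearch output plugin accepts a list of hosts and balances across the reachable ones:

# Hypothetical Logstash pipeline output.
output {
  elasticsearch {
    hosts => ["http://localhost:9200", "http://localhost:9201", "http://localhost:9202"]
    index => "my-index"
  }
}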

If I follow your ideas, I can build a 3-node cluster + 1 Kibana in a docker-compose file and, if I shut down the master, Kibana will still be able to connect to the newly elected master without any load balancer?

Best regards,

  1. There is no fixed master: at any point in time, just one of the 3 master-eligible nodes is the elected one. Kibana does not need to talk to the current master; any Elasticsearch node is generally fine. In larger clusters with dedicated master nodes, it is even an anti-pattern to point any (non-Elasticsearch) connections directly at those nodes.
  2. We added elasticsearch.hosts in 6.6. Since then you can list all three Elasticsearch instances (in your scenario) there, and Kibana should use any reachable instance. Configure it like that and try it out; a sketch follows below.
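
A minimal kibana.yml sketch, assuming Kibana runs on the same compose network as the three nodes (from the host you would list localhost:9200, 9201 and 9202 instead):

# Hypothetical kibana.yml excerpt: list every node so Kibana can fall back
# to whichever instance is reachable (elasticsearch.hosts exists since 6.6).
elasticsearch.hosts:
  - http://es01:9200
  - http://es02:9200
  - http://es03:9200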

Thanks for all your explanations! That's perfect =)
