ES (2 nodes) in Docker: Setup + connection

I've set up an ES cluster inside Docker, as described here:

version: '2'
services:
  elasticsearch1:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.5.2
    container_name: elasticsearch1
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    mem_limit: 1g
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - esnet
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.5.2
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch1"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    mem_limit: 1g
    volumes:
      - esdata2:/usr/share/elasticsearch/data
    networks:
      - esnet

volumes:
  esdata1:
    driver: local
  esdata2:
    driver: local

networks:
  esnet:
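To check that both nodes actually joined the cluster, you can query the exposed node from the host. This is a quick sketch assuming the port mapping above; note that the 5.x images from docker.elastic.co ship with X-Pack security enabled, so you may need to add `-u elastic:changeme` (the default credentials) to each command:

```shell
# "number_of_nodes" should be 2 once the cluster has formed
curl -s 'http://localhost:9200/_cluster/health?pretty'

# Lists all nodes; the current master is marked with * in the master column
curl -s 'http://localhost:9200/_cat/nodes?v'
```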

This works fine, but I still don't really understand the setup. For example, when I want to connect from my Logstash to my ES, which node do I have to connect to? Both elasticsearch1 and elasticsearch2, or only elasticsearch1? (I make the connection internally in Docker.)

Only ES node 1 has exposed its ports, so when I perform API commands it's always against node 1. Is ES node 2 receiving all the data, i.e. is it a pure replica of node 1? And when node 1 goes down, will node 2 take over?
(Do I still connect to node 1 but get routed immediately to node 2, or how does this work?)

I'd really appreciate a full explanation of how the 2 ES nodes in Docker work together and how I'm supposed to communicate with this cluster.

Thanks

Logstash can list multiple hosts, so put them both in and it'll load balance.
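As a sketch, a minimal Logstash output block listing both containers could look like this (assuming Logstash runs on the same Docker network, so the container names resolve):

```
output {
  elasticsearch {
    # Both cluster nodes; Logstash load-balances across the listed hosts
    hosts => ["elasticsearch1:9200", "elasticsearch2:9200"]
  }
}
```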

From a clustering point of view, the second node will become the master and take over managing the cluster. But you won't be able to interact with it, because you have not exposed its HTTP port.

2 nodes is bad because there is no majority for a quorum. We'd suggest 3 nodes or more, but if this is non-prod then you may be able to get away with it.
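A third node could be sketched in the same compose file as a copy of the elasticsearch2 service with its own data volume (esdata3 would also need to be declared under the top-level `volumes:`). With 3 master-eligible nodes on 5.x you'd also want `discovery.zen.minimum_master_nodes=2` on each node to avoid split-brain:

```yaml
  elasticsearch3:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.5.2
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch1"
      - "discovery.zen.minimum_master_nodes=2"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    mem_limit: 1g
    volumes:
      - esdata3:/usr/share/elasticsearch/data
    networks:
      - esnet
```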

Thanks for the answers. Okay, I can add a second ES host in Logstash and I can add a third node. But I'm still not sure how to work with the exposed ports. When I want to add a user, I want to do that once (so on one ES node) and have it created across the entire cluster. So I want to expose the port of only one of the ES instances. Is this fair enough?
Keep two Elasticsearch nodes internal and one exposed (which is of course also internal, so I actually have three internal nodes to connect to from Logstash). On that one exposed node I can perform my curl commands to create roles/users etc., which are then also known by the other instances.

By default, anything you create on one node will be distributed to the cluster, i.e. to the rest of the nodes.
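You can observe this through the single exposed node: create an index via elasticsearch1's mapped port, then check that its shards are spread over both nodes. This is a sketch with a hypothetical index name `test-index`; add `-u elastic:changeme` if X-Pack security is enabled:

```shell
# Create an index through the exposed node
curl -s -XPUT 'http://localhost:9200/test-index'

# Shows which node holds each primary (p) and replica (r) shard
curl -s 'http://localhost:9200/_cat/shards/test-index?v'
```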

Okay, so it's also enough to list just one of the Elasticsearch containers in my Logstash config, and the data will be distributed across the cluster as long as that node stays alive? (And specifying more ES hosts in Logstash makes it HA?) Thanks

Yes.


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.