Master Nodes not communicating properly in AWS

I am using HashiCorp Nomad in AWS with Docker images based on the official Elasticsearch ones.

The security groups allow communication properly, as I can verify with netcat inside the container once deployed.

I'm getting a lot of NodeNotConnectedExceptions, and I'm not sure what I can do about it. About one deploy in ten runs perfectly, until I redeploy and it fails with a similar log again.

When I don't get NodeNotConnectedExceptions, I get CoordinationStateRejectedExceptions, or the instances seem to fight over the master role. (Note: I tried this with 7.0.0, but experienced it with 7.2.0 as well.)

My Dockerfile:

COPY config/elasticsearch.yml /usr/share/elasticsearch/config/elasticsearch.yml
ENV ES_JAVA_OPTS "-Xmx2g -Xms2g"
RUN echo "vm.max_map_count = 262144" > /etc/sysctl.conf
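One caveat about the sysctl line above: vm.max_map_count is a kernel parameter, so writing it into the image's /etc/sysctl.conf does not change the value the container actually runs with; it needs to be set on the EC2 host itself. A minimal sketch for checking the live value (the path argument is only parameterised so the check can be exercised outside Linux; the procfs path and the 262144 minimum are the standard ones):

```python
from pathlib import Path

# vm.max_map_count is read by the kernel from procfs; an /etc/sysctl.conf
# baked into the image is never applied inside a running container.
def max_map_count_ok(required: int = 262144,
                     path: str = "/proc/sys/vm/max_map_count") -> bool:
    # Read the live value and compare it against Elasticsearch's minimum.
    return int(Path(path).read_text()) >= required
```

On the host, `max_map_count_ok()` returning False would explain a bootstrap-check failure, though not the connection drops described below.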

My configuration:

  host: ""
  bind_host: ""

bootstrap.memory_lock: true

  name: "clevyr-elk-cluster"
    - elasticsearch-master-0
    - elasticsearch-master-1
    - elasticsearch-master-2

  seed_providers: settings

  max_local_storage_nodes: 3 is coming from Nomad, where NOMAD_GROUP_NAME is elasticsearch-master and NOMAD_ALLOC_INDEX is from 0 to 2. network.publish_host is set to the private IP of the EC2 instances:${NOMAD_GROUP_NAME}-${NOMAD_ALLOC_INDEX}
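As a sanity check, the node names that interpolation produces can be compared against the master list in the configuration. A small sketch, reproducing by hand (as an assumption) the values Nomad injects into each allocation's environment:

```python
import os

# Assumed values, mirroring what Nomad injects into the task environment:
os.environ["NOMAD_GROUP_NAME"] = "elasticsearch-master"
group = os.environ["NOMAD_GROUP_NAME"]

# node.name for allocation indices 0..2, using the same interpolation pattern:
node_names = [f"{group}-{index}" for index in range(3)]
print(node_names)
# → ['elasticsearch-master-0', 'elasticsearch-master-1', 'elasticsearch-master-2']
```

These have to match the entries under cluster.initial_master_nodes exactly, since Elasticsearch compares them as plain strings during bootstrapping.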

Log messages from a clean run with no data:

Note the complaints about nodes not being connected; yet from this instance, inside the container, I can run:

nc -v 9300

and it connects successfully

Your log, specifically the NodeNotConnectedExceptions, indicates that the nodes are able to connect to each other, but something outside of Elasticsearch is then breaking these connections after a few messages. Often this is caused by an overenthusiastic IDS.
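A plain nc check only proves the initial connect succeeds, so it won't catch a middlebox that resets established connections. One way to test that hypothesis is to hold a connection to the transport port open across an idle period and then send data. A rough sketch, with placeholder host and port (it assumes nothing about what the peer does with the bytes, only whether the send itself survives):

```python
import socket
import time

def connection_survives(host: str, port: int, idle_seconds: float) -> bool:
    """Connect, stay idle, then send; False means the link was torn down."""
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            time.sleep(idle_seconds)   # idle window an IDS/firewall might reap
            sock.sendall(b"ping")      # arbitrary bytes; we only test delivery
            return True
    except OSError:
        return False

# Placeholder invocation: substitute another master's private IP and port 9300.
# print(connection_survives("10.0.0.12", 9300, idle_seconds=60))
```

If this returns False for long idle periods but True for short ones, something between the instances is cutting established connections, which matches the symptom in the logs.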

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.