Cannot start Elasticsearch when using repository-s3 plugin


I am using ES 5.5.1 running within Docker containers spread across 3 AWS EC2 instances. The containers are based on the official Elastic Docker image, with the repository-s3 plugin being installed as a step within my Dockerfile. However, when I try to start my nodes with this plugin installed, they throw a variety of exceptions. If I start the nodes using the exact same config but without the repository-s3 plugin, then everything starts as expected.

The contents of one of the docker-compose files are as follows:

version: '2'
services:
  elasticsearch1:
    container_name: elasticsearch1
    restart: always
    environment:
      - node.master=true
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
      - "network.publish_host="
      - "discovery.zen.ping.unicast.hosts=,,"
      - "discovery.zen.ping_timeout=5m"
      - "discovery.zen.minimum_master_nodes=2"
      - "network.bind_host="
      - ""
      - ""
      - "xpack.watcher.enabled=false"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    mem_limit: 2g
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300

volumes:
  esdata1:
    driver: local

And the contents of the Dockerfile are as follows:


USER elasticsearch
RUN elasticsearch-plugin install --batch repository-s3 && \
    bin/elasticsearch-keystore create && \
    echo ************ | bin/elasticsearch-keystore add -stdin && \
    echo ************ | bin/elasticsearch-keystore add -stdin

Hence the only difference between the two setups is that the repository-s3 plugin has been installed and the keys have been added to the keystore.

The logs can be found here (wouldn't let me put them here due to character limit):

NB - The discovery is using the standard zen unicast discovery as I couldn't seem to get the EC2 discovery plugin working either.

(Ryan Ernst) #2

The settings you are putting in the keystore have the wrong name. They should be s3.client.default.access_key and s3.client.default.secret_key.
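Applied to the Dockerfile above, the keystore lines would then look something like this (a sketch; the echoed values are placeholders for the real credentials, and the setting name is passed as the final argument to `elasticsearch-keystore add`):

```shell
bin/elasticsearch-keystore create
echo "$AWS_ACCESS_KEY" | bin/elasticsearch-keystore add --stdin s3.client.default.access_key
echo "$AWS_SECRET_KEY" | bin/elasticsearch-keystore add --stdin s3.client.default.secret_key
# Verify which setting names were actually stored:
bin/elasticsearch-keystore list
```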


Even with these settings corrected, it still throws exceptions when attempting to start the cluster.


Anyone got any ideas on this one?

(David Pilato) #5

it still throws up exceptions

Exactly the same exceptions?


Just tried again to double-check the exceptions. It appears that nothing can connect to the master node. Again, if I start this up without the repository-s3 plugin installed, everything works as expected.

The logs are here:

(Michael Basnight) #7

I am currently trying to run a 3-node scenario on a single host to validate what you are saying @ps_tom, but one thing I noticed immediately is that the publish_host setting you've set is incorrect. It should not include the port, and should be just - "network.publish_host=172.31.32.XXX". For completeness, it is fine that the unicast hosts list has host:port combinations, so you need not change those. Try it out with the proper publish_host and see what happens. I'll post a working 3-node config here too, soon.
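Concretely, the corrected environment entries might look like this (a sketch; the addresses are masked placeholders, and `discovery.zen.ping.unicast.hosts` is assumed to be the unicast hosts setting referred to above):

```yaml
environment:
  # publish_host is an address only -- no port
  - "network.publish_host=172.31.32.XXX"
  # host:port combinations are fine in the unicast hosts list
  - "discovery.zen.ping.unicast.hosts=172.31.32.XXX:9300,172.31.33.XXX:9300,172.31.34.XXX:9300"
```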

(Michael Basnight) #8

There was indeed an issue in 5.5. It was caused by a serialization bug: when the node stats for a node with secure settings (keystore settings) were serialized, a bad value was read for their size. The fix is here. It will be fixed in 5.6 when it comes out.

Another issue I see with the above snippet is that the Dockerfile RUN line uses a single dash for -stdin; it should be two (--stdin).
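Putting both corrections together (the setting names given earlier in the thread and the double-dash --stdin), the RUN line would read something like this; the echoed values are placeholders for the real AWS keys:

```dockerfile
USER elasticsearch
RUN elasticsearch-plugin install --batch repository-s3 && \
    bin/elasticsearch-keystore create && \
    echo "$AWS_ACCESS_KEY" | bin/elasticsearch-keystore add --stdin s3.client.default.access_key && \
    echo "$AWS_SECRET_KEY" | bin/elasticsearch-keystore add --stdin s3.client.default.secret_key
```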

(system) #9

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.