Elastic and Kibana 8.1.0 via Docker Compose

With 8.1.0, security is enabled by default and an SSL certificate is generated for localhost. I have this Docker Compose file.

services:
  elastic:
    image: elasticsearch:8.1.0
    ports:
      - 9200:9200
      - 9300:9300
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - discovery.type=single-node

  kibana:
    image: kibana:8.1.0
    ports:
      - 5601:5601
    depends_on:
      - elastic

When I set up Kibana, I can use neither http://elastic nor https://elastic through manual configuration: HTTP is not available, and elastic is not a hostname in the certificate.

If I configure with the Elasticsearch enrollment token, it sets the host to a 172.x Docker IP address. That is not desired.

Is there a way to make it work with the elastic host without any manual tweaking post install?

thanks
Matt

Welcome!

Have a look at the provided examples. I think this could help: Install Elasticsearch with Docker | Elasticsearch Guide [8.1] | Elastic

However, if you are using curl, you will need to use --insecure option because of the self-signed certificates.
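For example, something like this (the password is a placeholder; use the one printed in the Elasticsearch startup logs):

```shell
# Skip certificate verification entirely -- fine for a quick local test only.
# Replace <password> with the elastic user's password from the container logs.
curl --insecure -u elastic:<password> https://localhost:9200/
```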

@dadoonet Thanks for the reference.

I was expecting an option with less manual tweaking, this being Docker. There's quite a lot of code in the sample Docker Compose file alone.

In previous versions, before security was enforced, the setup was much simpler. Hopefully this can get easier in future versions.

Well, I guess it could be easier if you generated your own certificates first.

But I'm not an expert. And at least it's documented here :grin:

Have a closer look at the example docker-compose.yml. Everything is fully automated. The only thing you are probably missing is extracting a copy of /usr/share/elasticsearch/config/certs/ca.crt. If you export and use ca.crt, you should be able to run these commands from your host machine.

  • curl -s --cacert /tmp/ca.crt https://localhost:9200/ for es01
  • curl -s --cacert /tmp/ca.crt https://localhost:5601/ for kibana01

Overview:

Setup container:

  • Creates the volume /usr/share/elasticsearch/config/certs.
  • Runs bin/elasticsearch-certutil to generate certs/ca.zip.
  • Runs it again with a config of SANs to generate certs/certs.zip. Each server cert gets SANs like localhost, 127.0.0.1, and one of es01/es02/es03/kibana01.
  • Stores the cert files in the certs volume, which gets mounted by the subsequent containers.

You should be able to use Docker Compose volume commands to extract a copy of ca.crt to your host machine too.
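Roughly, the cert-generation step can be sketched like this (paths and instance names follow the example docker-compose.yml; the instances.yml content here is a trimmed illustration, not the exact file):

```shell
# Sketch of what the setup container does, using elasticsearch-certutil.
# 1. Generate a CA:
bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip
unzip config/certs/ca.zip -d config/certs

# 2. Describe the SANs for each instance (illustrative subset):
cat > config/certs/instances.yml <<'EOF'
instances:
  - name: es01
    dns: [es01, localhost]
    ip: [127.0.0.1]
  - name: kibana01
    dns: [kibana01, localhost]
    ip: [127.0.0.1]
EOF

# 3. Generate server certs signed by that CA:
bin/elasticsearch-certutil cert --silent --pem \
  --in config/certs/instances.yml \
  --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key \
  -out config/certs/certs.zip
unzip config/certs/certs.zip -d config/certs
```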

Containers es01, es02, es03, kibana01:

  • Mount /usr/share/elasticsearch/config/certs.
  • Start Elasticsearch/Kibana with ca.crt and one of the server certs.
  • Health checks do full trust checking and hostname verification, e.g. curl -s --cacert config/certs/ca/ca.crt https://localhost:9200. The server cert SANs were populated with localhost by the setup container.

Get a copy of /usr/share/elasticsearch/config/certs/ca.crt from the certs volume and use it on your host machine: pass it to curl for trust checking. Because Docker Compose maps the ports to localhost, hostname verification passes too.
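For instance, assuming the es01 service name from the example compose file, something like:

```shell
# Copy the CA cert out of the running container to the host
# (service name and path as in the example docker-compose.yml).
docker compose cp es01:/usr/share/elasticsearch/config/certs/ca.crt /tmp/ca.crt

# Then curl can verify both trust and hostname:
curl -s --cacert /tmp/ca.crt https://localhost:9200/
```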

If you are accessing it over HTTPS in a browser, temporarily import ca.crt into your browser's truststore. If your browser complains about localhost certificates (e.g. Chrome), you may need to click through the warning or configure your browser to allow localhost SANs.

Thanks for the explanation.

I was expecting it to be more “out of the box”: perhaps passing an environment variable of hostnames and it does the rest, maybe with a shared named volume between them to distribute the certs. That would help someone new to it like me.

Once you have a bit more experience, scripting it outside of docker-compose, as @dadoonet suggested, is a good option.

Thanks