Could use some guidance on creating a proper setup with compose

I have been following the instructions at: Install Kibana with Docker | Kibana Guide [8.5] | Elastic
I was able to get it all up and running according to these instructions without using Compose. However, upgrading would then be a manual process of copying data around. So it looks like I should be using volumes for data and config, so that in the future I only have to bump the versions in the compose file and be good to go.

The document above has a section on persisting the Kibana keystore, which looks like it comes from this post that describes the same thing for both Kibana and Elasticsearch: Persist Elasticsearch/Kibana Keystores with Docker

However, neither of these says what to do with the volumes once they are created, or what credentials need to go into them. I have been searching and making some guesses without success. Here is my current docker-compose file:

version: '3.9'

services:
  elasticsearch:
    image: elasticsearch:8.5.3
    ports:
      - 9200:9200
    environment:
      discovery.type: 'single-node'
      xpack.security.enabled: 'true'
      xpack.security.enrollment.enabled: 'true'
      ELASTIC_PASSWORD: 'something'
    volumes:
      - esconfig:/usr/share/elasticsearch/config
      - esdata:/usr/share/elasticsearch/data
    networks:
      - br

  kibana:
    image: kibana:8.5.3
    volumes:
      - kibconfig:/usr/share/kibana/config
      - kibdata:/usr/share/kibana/data
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch
    networks:
      - br

volumes:
  esconfig:
  esdata:
  kibconfig:
  kibdata:

networks:
  br:
    driver: bridge

I have tried allowing it to create these volumes itself. However, it complains about the SSL keystore not being set up (or something along those lines) and will not create an enrollment token. I've also tried manually copying the config and data folders out of a working copy into the volumes, but after removing the non-Compose containers and recreating them with Compose, Elasticsearch won't even start.

I've been building my knowledge with docker, but I'm far from an expert, and don't really know much of anything about Elastic. So I'm sure I'm missing some overall understanding here that would help me put these pieces together.

Any suggestions would be much appreciated, thanks!

Hi @Matt_Pruett, welcome to the community.

I am not sure what you want to add to the keystore. Perhaps you want to add the Kibana credentials, or are you just "playing / learning"? To make this all work, you are not required to add anything to the keystore. If there are specific items you want to add, you would use the keystore command to do so, and they could then be referenced by the Elasticsearch / Kibana containers respectively. And yes, the mounts need to be correct, which is often a Docker issue rather than an Elastic issue... some guidance below.

Have you looked at these docs in detail? They are the best place to start:

Install Elasticsearch with Docker

And specifically at Start a multi-node cluster with Docker Compose, which also covers Kibana.

It includes a fully working compose file that sets up the security certs and so on, and enrolls Kibana.
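
For reference, the Kibana service in that example is wired up roughly like this (a trimmed-down sketch, not the exact file from the docs; the es01 service name, the certs volume, and the KIBANA_PASSWORD variable follow that example's conventions, so treat the details as illustrative rather than copy-paste):

  kibana:
    image: docker.elastic.co/kibana/kibana:8.5.3
    depends_on:
      - es01
    ports:
      - 5601:5601
    environment:
      # Kibana authenticates as the kibana_system user, not the elastic superuser
      - ELASTICSEARCH_HOSTS=https://es01:9200
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
      # CA generated by the example's setup service, so Kibana trusts Elasticsearch's TLS certificate
      - ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
    volumes:
      - certs:/usr/share/kibana/config/certs
      - kibanadata:/usr/share/kibana/data

The important parts for your situation are that Kibana gets its own kibana_system credentials (set up for it by the example's setup container) and that it only mounts the certs and data directories, not the whole config directory.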

Is there something else you want to put in the keystore?

This works for sure; I do it weekly... Update: I just ran it all again right now and it works.

And there are Production Considerations

One of which is making sure that:

If you are bind-mounting a local directory or file, it must be readable by the elasticsearch user. In addition, this user must have write access to the config, data and log dirs (Elasticsearch needs write access to the config directory so that it can generate a keystore). A good strategy is to grant group access to gid 0 for the local directory.
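
As a concrete illustration of that point, if you switch to bind-mounting host directories instead of named volumes, you would prepare them with something along these lines before starting the stack (the paths here are just placeholders):

mkdir -p ./esconfig ./esdata
chgrp -R 0 ./esconfig ./esdata
chmod -R g+rwX ./esconfig ./esdata

Named volumes like the ones in your compose file, by contrast, are populated from the image's directory contents (with the image's ownership) the first time an empty volume is mounted, which usually sidesteps this.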

And there are specific directions on the keystore if you want to use it.
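
If you do later decide you want secure settings in the keystore, the usual pattern is to run the keystore CLI inside the running containers and keep the resulting keystore file on a persisted config volume or bind mount. A rough sketch, assuming your services are named elasticsearch and kibana as in your compose file (use docker-compose exec if you are on the older Compose binary; you will be prompted for the values):

# placeholder setting name - replace with a real secure setting from the docs
docker compose exec elasticsearch bin/elasticsearch-keystore add some.secure.setting
# elasticsearch.password is a Kibana secure setting (the kibana_system password)
docker compose exec kibana bin/kibana-keystore add elasticsearch.password

Anything added this way survives container recreation as long as the directory holding the keystore file is persisted.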

That link was very helpful, thanks. I didn't think to look at the Elasticsearch documentation; its Docker example is much better.
