How to set passwords for built-in users with docker-compose setup

Hey there,

I am running an Elasticsearch 7.3.0 cluster with three nodes (on three different machines) via a docker-compose setup. Here are my elasticsearch and kibana services as defined in docker-compose.yml:

services:
  es-mdi:
    container_name: lxelk01-es-mdi
    image: elasticsearch:7.3.0
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - elk
    volumes:
      - es-mdi-volume:/usr/share/elasticsearch
    environment:
      cluster.name: my-cluster
      node.name: lxelk01-es-mdi
      network.host: 0.0.0.0
      network.publish_host: 192.168.2.120
      http.port: 9200
      transport.port: 9300
      bootstrap.memory_lock: "true"
      node.master: "true"
      node.data: "true"
      node.ingest: "true"
      node.ml: "false"
      xpack.ml.enabled: "false"
      discovery.seed_hosts: 192.168.2.120:9300,192.168.2.121:9300,192.168.2.122:9300
      cluster.initial_master_nodes: 192.168.2.120:9300,192.168.2.121:9300,192.168.2.122:9300
      xpack.monitoring.enabled: "true"
      xpack.monitoring.collection.enabled: "true"
      ES_JAVA_OPTS: "-Xms4g -Xmx4g"
      xpack.security.enabled: "true"
      #xpack.license.self_generated.type: "trial"
    ulimits:
      memlock: -1
      #noproc: 65536
      nofile: 65536
      fsize: -1
      as: -1
    restart: always

  kibana:
    container_name: lxelk01-kibana
    image: kibana:7.3.0
    ports:
      - "5601:5601"
    networks:
      - elk
    volumes:
      - kibana-volume:/usr/share/kibana
    ulimits:
      memlock: -1
      #noproc: 65536
      nofile: 65536
      fsize: -1
      as: -1
    environment:
      SERVER_PORT: 5601
      SERVER_NAME: kibana.lxelk01.de
      ELASTICSEARCH_HOSTS: "http://192.168.2.120:9201/"
      XPACK_MONITORING_ENABLED: "true"
      XPACK_MONITORING_COLLECTION_ENABLED: "true"
      ELASTICSEARCH_USERNAME: "kibana"
      #ELASTICSEARCH_PASSWORD: ""
    restart: always

The cluster runs, meaning the nodes can find each other and they successfully elect a master. So far so good.

I followed the instructions on how to secure the Elastic Stack, and right now I'm stuck at setting the passwords for the built-in users.

I start all three nodes and then go into one node's bash via docker exec -it ID bash and call

bin/elasticsearch-setup-passwords auto -u "http://192.168.2.120:9200"
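For reference, the same thing as a single command from the host, using the container name from my compose file:

```shell
docker exec -it lxelk01-es-mdi \
  bin/elasticsearch-setup-passwords auto -u "http://192.168.2.120:9200"
```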

This prints the generated passwords to the terminal. At this point my kibana instance naturally cannot connect to the elasticsearch node, because I haven't set the credentials of the kibana user yet. So I stop and remove the kibana service, edit the docker-compose file and set

ELASTICSEARCH_PASSWORD: "foo"

as environment argument (foo will be the generated password).

Then I bring kibana up again, it connects to the cluster, and I get prompted for basic authentication when trying to access it. There I log in as the elastic user with the generated password and activate the trial license.

Now here is my issue:

Even though I successfully logged in as the elastic superuser, I can't access the user management UI. That is where I want to create regular logins for the users.

Questions:

  1. Why can I not see the user management function even though I logged in as elastic user? (left highlighting)
  2. I expected it to show the username "elastic" in the right highlighting. Am I really logged in?
  3. Is this workflow to set passwords for the built-in users in a docker environment good or would you suggest a different approach?

Thanks in advance!


Hi there,

Kibana doesn't use basic authentication; it uses a login form. Your issue is that you have enabled security only in Elasticsearch, not in Kibana, and this is why you get this strange behavior. The basic auth prompt you see actually comes from Elasticsearch, not Kibana (Kibana makes requests to Elasticsearch on your behalf). Kibana is set up for anonymous access (security is implicitly disabled), and this is why you don't see an icon with your user on the far right as you would expect. You need to set

XPACK_SECURITY_ENABLED: "true"

in your kibana's environment too.

I'd do it slightly differently (hints for this are available in this part of the docs):

  • Add ELASTIC_PASSWORD: $INITIAL_PASSWORD in the environment section of the elasticsearch service in your docker-compose.yml.
  • Set ELASTICSEARCH_PASSWORD in your kibana's environment section to the password you want to subsequently set for the kibana user.

For both of the above, you can either set the passwords directly in the docker-compose file or read them from the environment (i.e. $INITIAL_PASSWORD) in one of the supported ways.

  • Once your nodes are up and the cluster has formed, use the elastic user and the value of $INITIAL_PASSWORD to set the password of the kibana user to the value you set in docker-compose, using the Change Password API.
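As a sketch of the "read from the environment" variant, using docker-compose's standard variable interpolation (the .env file and the INITIAL_PASSWORD name here are illustrative):

```yaml
# .env (same directory as docker-compose.yml):
#   INITIAL_PASSWORD=some-bootstrap-password

# docker-compose.yml (excerpt)
services:
  es-mdi:
    environment:
      xpack.security.enabled: "true"
      ELASTIC_PASSWORD: ${INITIAL_PASSWORD}
```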

Hi @ikakavas,

thank you for your reply!

I added the environment variable XPACK_SECURITY_ENABLED: "true" to the kibana service and gave it the initial password kibanachangeme (for testing purposes I set it directly in the compose file).

I also set an initial password elastic for the elastic user itself in the compose files of the three nodes. Next, I started the nodes (not kibana). After the cluster had formed, I changed the password of the kibana user via the Change Password API, as you suggested, like so:

curl -u elastic:elastic -XPOST -H "Content-Type: application/json" http://192.168.2.120:9200/_security/user/kibana/_password -d "{ \"password\": \"kibanachangeme\" }"
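As a quick sanity check that the new password took effect, I can authenticate as the kibana user against the same node:

```shell
curl -u kibana:kibanachangeme http://192.168.2.120:9200/_security/_authenticate
```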

After this request succeeded, I started kibana. As I can see from the logs, kibana can connect to the cluster - great!

But when I access kibana via the browser, it still prompts me for basic auth from elasticsearch and does not show the usual kibana login form. What's still missing?

Below are the updated services:

services:
  es-mdi:
    container_name: lxelk01-es-mdi
    image: elasticsearch:7.3.0
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - elk
    volumes:
      - es-mdi-volume:/usr/share/elasticsearch
    environment:
      cluster.name: my-cluster
      node.name: lxelk01-es-mdi
      network.host: 0.0.0.0
      network.publish_host: 192.168.2.120
      http.port: 9200
      transport.port: 9300
      bootstrap.memory_lock: "true"
      node.master: "true"
      node.data: "true"
      node.ingest: "true"
      node.ml: "false"
      xpack.ml.enabled: "false"
      discovery.seed_hosts: 192.168.2.120:9300,192.168.2.121:9300,192.168.2.122:9300
      cluster.initial_master_nodes: 192.168.2.120:9300,192.168.2.121:9300,192.168.2.122:9300
      xpack.monitoring.enabled: "true"
      xpack.monitoring.collection.enabled: "true"
      ES_JAVA_OPTS: "-Xms16g -Xmx16g"
      xpack.security.enabled: "true"
      ELASTIC_PASSWORD: "elastic"
      xpack.license.self_generated.type: "trial"
    ulimits:
      memlock: -1
      #noproc: 65536
      nofile: 65536
      fsize: -1
      as: -1
    restart: unless-stopped

  kibana:
    container_name: lxelk01-kibana
    image: kibana:7.3.0
    ports:
      - "5601:5601"
    networks:
      - elk
    volumes:
      - kibana-volume:/usr/share/kibana
    depends_on:
      - es-coord
    ulimits:
      memlock: -1
      #noproc: 65536
      nofile: 65536
      fsize: -1
      as: -1
    environment:
      SERVER_PORT: 5601
      SERVER_NAME: kibana.lxelk01.de
      ELASTICSEARCH_HOSTS: "http://192.168.2.120:9201/"
      XPACK_MONITORING_ENABLED: "true"
      XPACK_MONITORING_COLLECTION_ENABLED: "true"
      XPACK_SECURITY_ENABLED: "true"
      ELASTICSEARCH_USERNAME: "kibana"
      ELASTICSEARCH_PASSWORD: "kibanachangeme"
    restart: unless-stopped

I would appreciate it if you could take another look at my issue. Thanks in advance!

Hey @ikakavas,

I've fixed the problem myself! :slight_smile:

What I haven't mentioned is that on each of the three machines I have two elasticsearch nodes - an MDI node and a dedicated coordinating node. Kibana connects to the coordinating node, as specified in

ELASTICSEARCH_HOSTS: "http://192.168.2.120:9201/" # port 9201 is the coordinating node

I noticed that I forgot to set

xpack.security.enabled: "true"

on that coordinating node as well.

So after I set this variable on that service, everything worked as expected! :slight_smile:
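For reference, the relevant part of the coordinating node's service now looks roughly like this (a sketch; the service is named es-coord in my compose file, and coordinating nodes have all node roles disabled):

```yaml
  es-coord:
    image: elasticsearch:7.3.0
    environment:
      node.master: "false"
      node.data: "false"
      node.ingest: "false"
      xpack.security.enabled: "true"  # this was the missing setting
```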

Nevertheless, thank you for getting me on the right track!

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.