[Docker] Elastic Security: Kibana cannot reach Elasticsearch (Unable to revive connection)

Hi there!

I am facing an issue while trying to enable Elastic Security everywhere. I've finally figured out how to set everything up between the nodes, but Kibana does not seem to be able to reach the Elasticsearch node:

{"type":"log","@timestamp":"2020-04-26T19:32:04Z","tags":["warning","plugins","licensing"],"pid":6,"message":"License information could not be obtained from Elasticsearch due to Error: No Living connections error"}
{"type":"log","@timestamp":"2020-04-26T19:32:05Z","tags":["warning","elasticsearch","admin"],"pid":6,"message":"Unable to revive connection: https://es01:9200/"}
{"type":"log","@timestamp":"2020-04-26T19:32:05Z","tags":["warning","elasticsearch","admin"],"pid":6,"message":"No living connections"}

However, when I run an nslookup inside the Kibana container, everything looks fine:

[root@24f6cc9a149f kibana]# nslookup es01
Server: 127.0.0.11
Address: 127.0.0.11#53

Non-authoritative answer:
Name: es01
Address: 172.28.0.3
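
Since DNS resolves, the next thing I want to rule out is the HTTPS port itself. A rough check along these lines from inside the Kibana container should show whether es01 answers at all over TLS (assuming curl is available in the Kibana image, which it normally is; paths and password are placeholders for my setup):

# -k skips certificate verification, so this only proves the port answers over TLS.
docker exec -it kibana curl -vk https://es01:9200
# If the CA were also mounted into the Kibana container, a stricter check would be:
# docker exec -it kibana curl -v --cacert <path-to>/ca/ca.crt -u elastic:<password> https://es01:9200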

Here is my docker-compose.yml for certificate generation:

version: '2.2'
services:
  create_certs:
    container_name: create_certs
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.2
    command: >
      bash -c '
        if [[ ! -f /certs/bundle.zip ]]; then
          bin/elasticsearch-certutil cert --silent --pem --in config/certificates/instances.yml -out /certs/bundle.zip;
          unzip /certs/bundle.zip -d /certs;
        fi;
        chown -R 1000:0 /certs
      '
    user: "0"
    working_dir: /usr/share/elasticsearch
    volumes: ['certs:/certs', '.:/usr/share/elasticsearch/config/certificates']

volumes:
  certs:
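
(The instances.yml that certutil reads isn't included here; it follows the usual certutil input format, roughly one entry per node like this, with names matching the service names. The dns/ip entries below are illustrative:)

instances:
  - name: es01
    dns:
      - es01
      - localhost
    ip:
      - 127.0.0.1
  - name: es02
    dns:
      - es02
      - localhost
    ip:
      - 127.0.0.1
  - name: es03
    dns:
      - es03
      - localhost
    ip:
      - 127.0.0.1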

And the docker-compose.yml with Elasticsearch and Kibana:

    version: '3.7'
    services:
      kibana:
        image: docker.elastic.co/kibana/kibana:7.6.2
        container_name: kibana
        volumes:
          - IAkibanaData:/usr/share/IAkibana/config/kibana.yml
        environment:
          - ELASTICSEARCH_HOSTS=https://es01:9200
          - ELASTICSEARCH_USERNAME=elastic
          - ELASTICSEARCH_PASSWORD=$ELASTIC_PASSWORD
        ports:
          - 5601:5601
        depends_on:
          - es01
          - es02
          - es03
        networks:
          - net3
          - net2
      es01:
        image: docker.elastic.co/elasticsearch/elasticsearch:7.6.2
        container_name: es01
        environment:
          - node.name=es01
          - cluster.name=es-docker-cluster
          - xpack.security.enabled=true
          - xpack.security.http.ssl.enabled=true
          - xpack.security.http.ssl.key=$CERTS_DIR/es01/es01.key
          - xpack.security.http.ssl.certificate_authorities=$CERTS_DIR/ca/ca.crt
          - xpack.security.http.ssl.certificate=$CERTS_DIR/es01/es01.crt
          - xpack.security.transport.ssl.enabled=true
          - xpack.security.transport.ssl.verification_mode=certificate
          - xpack.security.transport.ssl.certificate_authorities=$CERTS_DIR/ca/ca.crt
          - xpack.security.transport.ssl.certificate=$CERTS_DIR/es01/es01.crt
          - xpack.security.transport.ssl.key=$CERTS_DIR/es01/es01.key
          - ELASTIC_USERNAME=elastic
          - ELASTICSEARCH_PASSWORD=$ELASTIC_PASSWORD
          - discovery.seed_hosts=es02,es03
          - cluster.initial_master_nodes=es01,es02,es03
          - bootstrap.memory_lock=true
          - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
          - LimitNOFILE=65536
          - LimitMEMLOCK=infinity
          - TimeoutStopSec=0
        healthcheck:
          test: curl --cacert $CERTS_DIR/ca/ca.crt -s https://localhost:9200 >/dev/null; if [[ $$? == 52 ]]; then echo 0; else echo 1; fi
          interval: 30s
          timeout: 10s
          retries: 5
        ulimits:
          memlock:
            soft: -1
            hard: -1
        volumes:
          - es01data:/usr/share/elasticsearch/data
          - certs:$CERTS_DIR
        networks:
          - net1
          - net2
      es02:
        image: docker.elastic.co/elasticsearch/elasticsearch:7.6.2
        container_name: es02
        environment:
          - node.name=es02
          - cluster.name=es-docker-cluster
          - ELASTIC_USERNAME=elastic
          - ELASTICSEARCH_PASSWORD=$ELASTIC_PASSWORD
          - discovery.seed_hosts=es01,es03
          - cluster.initial_master_nodes=es01,es02,es03
          - bootstrap.memory_lock=true
          - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
          - xpack.security.enabled=true
          - xpack.security.http.ssl.enabled=true
          - xpack.security.http.ssl.key=$CERTS_DIR/es02/es02.key
          - xpack.security.http.ssl.certificate_authorities=$CERTS_DIR/ca/ca.crt
          - xpack.security.http.ssl.certificate=$CERTS_DIR/es02/es02.crt
          - xpack.security.transport.ssl.enabled=true
          - xpack.security.transport.ssl.verification_mode=certificate
          - xpack.security.transport.ssl.certificate_authorities=$CERTS_DIR/ca/ca.crt
          - xpack.security.transport.ssl.certificate=$CERTS_DIR/es02/es02.crt
          - xpack.security.transport.ssl.key=$CERTS_DIR/es02/es02.key
        ulimits:
          memlock:
            soft: -1
            hard: -1
        volumes:
          - es02data:/usr/share/elasticsearch/data
          - certs:$CERTS_DIR
        networks:
          - net1
          - net2
    [...]

Links I've used to guide me:

Thank you in advance!

Can you please share the entire docker-compose file and check your logs from es01 and es02? Also, are these 3 lines the only thing you see in your Kibana logs?

It's either that the cluster doesn't start because of a configuration error, or that the logs you are sharing from Kibana are from early on, before the Elasticsearch cluster has formed and is up and running.
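
For example, once the containers are up, something along these lines run against es01 directly would show whether the cluster itself is healthy before looking at Kibana (the CA path depends on what CERTS_DIR expands to in your .env, and the password is whatever you set there):

docker exec -it es01 curl -s --cacert /usr/share/elasticsearch/config/certificates/ca/ca.crt \
  -u elastic:<your ELASTIC_PASSWORD> "https://localhost:9200/_cluster/health?pretty"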

I've completely rebuilt the infrastructure and then started Kibana again. Here are the logs I got:

Kibana (the most telling ones):

{"type":"log","@timestamp":"2020-04-27T20:04:17Z","tags":["warning","config","deprecation"],"pid":6,"message":"Setting [elasticsearch.username] to \"elastic\" is deprecated. You should use the \"kibana\" user instead."}
{"type":"log","@timestamp":"2020-04-27T20:04:17Z","tags":["warning","config","deprecation"],"pid":6,"message":"Setting [xpack.monitoring.elasticsearch.username] to \"elastic\" is deprecated. You should use the     \"kibana\" user instead."}
{"type":"log","@timestamp":"2020-04-27T20:04:18Z","tags":["info","savedobjects-service"],"pid":6,"message":"Waiting until all Elasticsearch nodes are compatible with Kibana before starting saved objects migrations..."}
{"type":"log","@timestamp":"2020-04-27T20:04:18Z","tags":["error","elasticsearch","data"],"pid":6,"message":"Request error, retrying\nHEAD https://es01:9200/.apm-agent-configuration => unable to verify the first certificate"}
{"type":"log","@timestamp":"2020-04-27T20:04:18Z","tags":["error","elasticsearch","data"],"pid":6,"message":"Request error, retrying\nGET https://es01:9200/_xpack => unable to verify the first certificate"}
{"type":"log","@timestamp":"2020-04-27T20:04:18Z","tags":["error","elasticsearch","admin"],"pid":6,"message":"Request error, retrying\nGET https://es01:9200/_nodes?filter_path=nodes.*.version%2Cnodes.*.http.publish_address%2Cnodes.*.ip => unable to verify the first certificate"}
Could not create APM Agent configuration: No Living connections
{"type":"log","@timestamp":"2020-04-27T20:04:18Z","tags":["warning","elasticsearch","data"],"pid":6,"message":"Unable to revive connection: https://es01:9200/"}
{"type":"log","@timestamp":"2020-04-27T20:04:18Z","tags":["warning","elasticsearch","data"],"pid":6,"message":"No living connections"}

es01:

{"type": "server", "timestamp": "2020-04-27T20:00:07,536Z", "level": "INFO", "component": "o.e.c.s.ClusterApplierService", "cluster.name": "es-docker-cluster", "node.name": "es01", "message": "master node changed {previous [], current [{es02}{JP9NOCAvTXmzGLb9ASOH2w}{ho3RW0scTCeIsNXuLDLPyQ}{172.27.0.2}{172.27.0.2:9300}{dilm}{ml.machine_memory=9257639936, ml.max_open_jobs=20, xpack.installed=true}]}, added {{es03}{YS3d0fIQSgaNH07oE5z3qw}{IKGHRGACRtiuO9GpD4p2zA}{172.27.0.3}{172.27.0.3:9300}{dilm}{ml.machine_memory=9257639936, ml.max_open_jobs=20, xpack.installed=true},{es02}{JP9NOCAvTXmzGLb9ASOH2w}{ho3RW0scTCeIsNXuLDLPyQ}{172.27.0.2}{172.27.0.2:9300}{dilm}{ml.machine_memory=9257639936, ml.max_open_jobs=20, xpack.installed=true}}, term: 7, version: 23, reason: ApplyCommitRequest{term=7, version=23, sourceNode={es02}{JP9NOCAvTXmzGLb9ASOH2w}{ho3RW0scTCeIsNXuLDLPyQ}{172.27.0.2}{172.27.0.2:9300}{dilm}{ml.machine_memory=9257639936, ml.max_open_jobs=20, xpack.installed=true}}" }
{"type": "server", "timestamp": "2020-04-27T20:00:07,674Z", "level": "INFO", "component": "o.e.h.AbstractHttpServerTransport", "cluster.name": "es-docker-cluster", "node.name": "es01", "message": "publish_address {172.27.0.4:9200}, bound_addresses {0.0.0.0:9200}", "cluster.uuid": "ndhomn--QA-BgNRkh6eMHQ", "node.id": "CxlcARQWS5e9bptz53pCnw"  }
{"type": "server", "timestamp": "2020-04-27T20:00:07,674Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "es-docker-cluster", "node.name": "es01", "message": "started", "cluster.uuid": "ndhomn--QA-BgNRkh6eMHQ", "node.id": "CxlcARQWS5e9bptz53pCnw"  }

es02:

{"type": "server", "timestamp": "2020-04-27T20:00:06,798Z", "level": "INFO", "component": "o.e.c.s.ClusterApplierService", "cluster.name": "es-docker-cluster", "node.name": "es02", "message": "master node changed {previous [], current [{es02}{JP9NOCAvTXmzGLb9ASOH2w}{ho3RW0scTCeIsNXuLDLPyQ}{172.27.0.2}{172.27.0.2:9300}{dilm}{ml.machine_memory=9257639936, xpack.installed=true, ml.max_open_jobs=20}]}, added {{es03}{YS3d0fIQSgaNH07oE5z3qw}{IKGHRGACRtiuO9GpD4p2zA}{172.27.0.3}{172.27.0.3:9300}{dilm}{ml.machine_memory=9257639936, ml.max_open_jobs=20, xpack.installed=true}}, term: 7, version: 22, reason: Publication{term=7, version=22}" }
{"type": "server", "timestamp": "2020-04-27T20:00:06,847Z", "level": "INFO", "component": "o.e.c.s.MasterService", "cluster.name": "es-docker-cluster", "node.name": "es02", "message": "node-join[{es01}{CxlcARQWS5e9bptz53pCnw}{8ZfSYRZLSkaq34jwL3yOFA}{172.27.0.4}{172.27.0.4:9300}{dilm}{ml.machine_memory=9257639936, ml.max_open_jobs=20, xpack.installed=true} join existing leader], term: 7, version: 23, delta: added {{es01}{CxlcARQWS5e9bptz53pCnw}{8ZfSYRZLSkaq34jwL3yOFA}{172.27.0.4}{172.27.0.4:9300}{dilm}{ml.machine_memory=9257639936, ml.max_open_jobs=20, xpack.installed=true}}", "cluster.uuid": "ndhomn--QA-BgNRkh6eMHQ", "node.id": "JP9NOCAvTXmzGLb9ASOH2w"  }
{"type": "server", "timestamp": "2020-04-27T20:00:06,913Z", "level": "INFO", "component": "o.e.h.AbstractHttpServerTransport", "cluster.name": "es-docker-cluster", "node.name": "es02", "message": "publish_address {172.27.0.2:9200}, bound_addresses {0.0.0.0:9200}", "cluster.uuid": "ndhomn--QA-BgNRkh6eMHQ", "node.id": "JP9NOCAvTXmzGLb9ASOH2w"  }
{"type": "server", "timestamp": "2020-04-27T20:00:06,916Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "es-docker-cluster", "node.name": "es02", "message": "started", "cluster.uuid": "ndhomn--QA-BgNRkh6eMHQ", "node.id": "JP9NOCAvTXmzGLb9ASOH2w"  }
{"type": "server", "timestamp": "2020-04-27T20:00:07,555Z", "level": "INFO", "component": "o.e.c.s.ClusterApplierService", "cluster.name": "es-docker-cluster", "node.name": "es02", "message": "added {{es01}{CxlcARQWS5e9bptz53pCnw}{8ZfSYRZLSkaq34jwL3yOFA}{172.27.0.4}{172.27.0.4:9300}{dilm}{ml.machine_memory=9257639936, ml.max_open_jobs=20, xpack.installed=true}}, term: 7, version: 23, reason: Publication{term=7, version=23}", "cluster.uuid": "ndhomn--QA-BgNRkh6eMHQ", "node.id": "JP9NOCAvTXmzGLb9ASOH2w"  }

And my complete docker-compose.yml file:

version: '3.7'
services:
  kibana:
    image: docker.elastic.co/kibana/kibana:7.6.2
    container_name: kibana
    volumes:
      - IAkibanaData:/usr/share/IAkibana/config/kibana.yml
    environment:
      - ELASTICSEARCH_HOSTS=https://es01:9200
      - ELASTICSEARCH_USERNAME=elastic
      - ELASTICSEARCH_PASSWORD=$ELASTIC_PASSWORD
    ports:
      - 5601:5601
    depends_on:
      - es01
      - es02
      - es03
    networks:
      - net3
      - net2
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.2
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=$CERTS_DIR/es01/es01.key
      - xpack.security.http.ssl.certificate_authorities=$CERTS_DIR/ca/ca.crt
      - xpack.security.http.ssl.certificate=$CERTS_DIR/es01/es01.crt
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.certificate_authorities=$CERTS_DIR/ca/ca.crt
      - xpack.security.transport.ssl.certificate=$CERTS_DIR/es01/es01.crt
      - xpack.security.transport.ssl.key=$CERTS_DIR/es01/es01.key
      - ELASTIC_USERNAME=elastic
      - ELASTICSEARCH_PASSWORD=$ELASTIC_PASSWORD
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - LimitNOFILE=65536
      - LimitMEMLOCK=infinity
      - TimeoutStopSec=0
    healthcheck:
      test: curl --cacert $CERTS_DIR/ca/ca.crt -s https://localhost:9200 >/dev/null; if [[ $$? == 52 ]]; then echo 0; else echo 1; fi
      interval: 30s
      timeout: 10s
      retries: 5
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - es01data:/usr/share/elasticsearch/data
      - certs:$CERTS_DIR
    networks:
      - net1
      - net2
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.2
    container_name: es02
    environment:
      - node.name=es02
      - cluster.name=es-docker-cluster
      - ELASTIC_USERNAME=elastic
      - ELASTICSEARCH_PASSWORD=$ELASTIC_PASSWORD
      - discovery.seed_hosts=es01,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=$CERTS_DIR/es02/es02.key
      - xpack.security.http.ssl.certificate_authorities=$CERTS_DIR/ca/ca.crt
      - xpack.security.http.ssl.certificate=$CERTS_DIR/es02/es02.crt
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.certificate_authorities=$CERTS_DIR/ca/ca.crt
      - xpack.security.transport.ssl.certificate=$CERTS_DIR/es02/es02.crt
      - xpack.security.transport.ssl.key=$CERTS_DIR/es02/es02.key
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - es02data:/usr/share/elasticsearch/data
      - certs:$CERTS_DIR
    networks:
      - net1
      - net2
  es03:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.2
    container_name: es03
    environment:
      - node.name=es03
      - cluster.name=es-docker-cluster
      - ELASTIC_USERNAME=elastic
      - ELASTICSEARCH_PASSWORD=$ELASTIC_PASSWORD
      - discovery.seed_hosts=es01,es02
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=$CERTS_DIR/es03/es03.key
      - xpack.security.http.ssl.certificate_authorities=$CERTS_DIR/ca/ca.crt
      - xpack.security.http.ssl.certificate=$CERTS_DIR/es03/es03.crt
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.certificate_authorities=$CERTS_DIR/ca/ca.crt
      - xpack.security.transport.ssl.certificate=$CERTS_DIR/es03/es03.crt
      - xpack.security.transport.ssl.key=$CERTS_DIR/es03/es03.key
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - es03data:/usr/share/elasticsearch/data
      - certs:$CERTS_DIR
    networks:
      - net1
      - net2
volumes:
  certs:
    driver: local
  es01data:
    driver: local
  es02data:
    driver: local
  es03data:
    driver: local
  IAkibanaData:
    driver: local
networks:
  net1:
    driver: bridge
  net2:
    driver: bridge
  net3:
    driver: bridge
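
For completeness, this is roughly how I bring everything up (the .env values shown here are placeholders for mine):

# .env next to the compose files, roughly:
#   COMPOSE_PROJECT_NAME=es
#   CERTS_DIR=/usr/share/elasticsearch/config/certificates
#   ELASTIC_PASSWORD=<my password>
docker-compose -f create-certs.yml run --rm create_certs
docker-compose up -d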

I've tried another approach and updated my create-certs.yml file:

version: '2.2'

services:
  create_certs:
    container_name: create_certs
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.2
    command: >
      bash -c '
        if [[ ! -f /certs/bundle.zip ]]; then
          bin/elasticsearch-certutil cert --silent --pem --in config/certificates/instances.yml -out /certs/bundle.zip;
          unzip /certs/bundle.zip -d /certs
        fi;
        yum install openssl -y
        if [[ ! -f /certs/identity.p12 ]]; then
          openssl req -x509 -newkey rsa:4096 -keyout /certs/cakey_kibana_browser.pem -out /certs/cacert_kibana_browser.pem -days 365 -nodes -subj "/C=GB/ST=London/L=London/O=MSRT/CN=www.msrt.com"
          openssl pkcs12 -export -in /certs/cacert_kibana_browser.pem -inkey /certs/cakey_kibana_browser.pem -out /certs/identity.p12 -name "kibana-es-key" -passout pass:;
        fi;
        chown -R 1000:0 /certs
        ls -la /certs
      '
    user: "0"
    working_dir: /usr/share/elasticsearch
    volumes: ['certs:/certs', '.:/usr/share/elasticsearch/config/certificates']

volumes:
  certs:
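
To sanity-check what that produced, the keystore can be inspected from any container that has the certs volume mounted (adjusting /certs to wherever it is mounted there; the empty password matches the -passout pass: above):

# Lists the certificate stored in the PKCS#12 bundle; fails loudly if the export went wrong.
openssl pkcs12 -info -in /certs/identity.p12 -nokeys -passin pass: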

And I added some environment variables to the kibana section of docker-compose.yml (a rough kibana.yml equivalent is sketched after the list):

  • ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES # for TLS between Kibana and the Elasticsearch nodes
  • SERVER_SSL_KEYSTORE_PASSWORD # the P12 keystore needs a password, even an empty one
  • SERVER_SSL_KEYSTORE_PATH # for HTTPS between the browser and Kibana
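
These variables are just the Docker image's way of setting kibana.yml keys. Roughly, they correspond to something like this (the CA path is a placeholder for wherever the certs volume ends up mounted in the Kibana container):

elasticsearch.hosts: ["https://es01:9200"]
elasticsearch.username: "elastic"
elasticsearch.ssl.certificateAuthorities: ["/usr/share/elasticsearch/config/certificates/ca/ca.crt"]
server.ssl.enabled: true
# SERVER_SSL_KEYSTORE_PATH / SERVER_SSL_KEYSTORE_PASSWORD map onto the matching
# server.ssl.keystore.* settings, assuming this Kibana version accepts a PKCS#12 keystore there.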

Kibana section:

  kibana:
    image: docker.elastic.co/kibana/kibana:7.6.2
    container_name: kibana
    volumes:
      - IAkibanaData:/usr/share/IAkibana/config/kibana.yml
      - certs:$CERTS_DIR
    environment:
      - ELASTICSEARCH_HOSTS=https://es01:9200
      - ELASTICSEARCH_USERNAME=elastic
      - ELASTICSEARCH_PASSWORD=$ELASTIC_PASSWORD
      - SERVER_SSL_ENABLED=true
      - SERVER_SSL_KEYSTORE_PATH=$CERTS_DIR/identity.p12
      - SERVER_SSL_KEYSTORE_PASSWORD=""
      - ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=$CERTS_DIR/ca/ca.crt
    ports:
      - 5601:5601
    depends_on:
      - es01
      - es02
      - es03
    networks:
      - net3
      - net2
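
With the certs volume now mounted into Kibana as well, I can at least confirm the files are where the environment variables say they are (again assuming CERTS_DIR is /usr/share/elasticsearch/config/certificates, as in the docs .env):

docker exec -it kibana ls -l \
  /usr/share/elasticsearch/config/certificates/ca/ca.crt \
  /usr/share/elasticsearch/config/certificates/identity.p12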

And now Kibana seems to be able to communicate with Elasticsearch. However, I now get the following error:

es01 log:

{"type": "server", "timestamp": "2020-04-29T22:06:30,786Z", "level": "INFO", "component": "o.e.x.s.a.AuthenticationService", "cluster.name": "es-docker-cluster", "node.name": "es01", "message": "Authentication of [elastic] was terminated by realm [reserved] - failed to authenticate user [elastic]", "cluster.uuid": "eEehUpedTdStFm7PyLX-Og", "node.id": "tpffgBf3RrmcNvN9QVIrhw" }

And Kibana log:

{"type":"log","@timestamp":"2020-04-29T22:06:30Z","tags":["warning","plugins","licensing"],"pid":6,"message":"License information could not be obtained from Elasticsearch due to [security_exception] failed to authenticate user [elastic], with { header={ WWW-Authenticate={ 0="Bearer realm=\"security\"" & 1="ApiKey" & 2="Basic realm=\"security\" charset=\"UTF-8\"" } } } :: {"path":"/_xpack","statusCode":401,"response":"{\"error\":{\"root_cause\":[{\"type\":\"security_exception\",\"reason\":\"failed to authenticate user [elastic]\",\"header\":{\"WWW-Authenticate\":[\"Bearer realm=\\\"security\\\"\",\"ApiKey\",\"Basic realm=\\\"security\\\" charset=\\\"UTF-8\\\"\"]}}],\"type\":\"security_exception\",\"reason\":\"failed to authenticate user [elastic]\",\"header\":{\"WWW-Authenticate\":[\"Bearer realm=\\\"security\\\"\",\"ApiKey\",\"Basic realm=\\\"security\\\" charset=\\\"UTF-8\\\"\"]}},\"status\":401}","wwwAuthenticateDirective":"Bearer realm=\"security\", ApiKey, Basic realm=\"security\" charset=\"UTF-8\""} error"}

