On Docker containers: Kibana connects to a different cluster instead of the Elasticsearch container

Hi,
I want to install and run Elasticsearch and Kibana locally with Docker.
I've tried several methods and reinstalled everything many times, with no success.
I can't get Kibana to connect to the Elasticsearch in the other container, at least not to the same cluster I'm reaching through an API call to localhost:9200 from the host machine.

I'm following the instructions for running Elasticsearch with Docker locally.

All the commands work well with no errors:

docker network create elastic
docker pull docker.elastic.co/elasticsearch/elasticsearch:8.11.1
docker run --name elasticsearch --net elastic -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" -t docker.elastic.co/elasticsearch/elasticsearch:8.11.1

Running Kibana for the first time shows a strange address, 172.18.0.2:9200:


Enrollment token

eyJ2ZXIiOiI4LjExLjEiLCJhZHIiOlsiMTcyLjE4LjAuMjo5MjAwIl0sImZnciI6IjlmZmNkMTQxYjc4MjZmYjMxYmYzZGM3ODkwN2IxOWE4NTg1ZTZlYjc1MTc0ZDM2OGNjNDgwZmM0YzZkODMzNzMiLCJrZXkiOiJXYm94SUl3Qm52Y3Z5UHI5NW1JbTpxV25lMjF0bFFkeWNaNEwtaVFzM01nIn0=
Connect to

https://172.18.0.2:9200
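An aside on where that address comes from: enrollment tokens are base64-encoded JSON, so the token can simply be decoded to see what Elasticsearch is advertising. 172.18.0.2 is the container's IP on the user-defined elastic bridge network, which is not directly reachable from the host by default.

```shell
# Decode the enrollment token from above; the "adr" field shows the address
# Elasticsearch advertises (the container's IP on the "elastic" bridge
# network, not an address reachable from the host by default).
echo 'eyJ2ZXIiOiI4LjExLjEiLCJhZHIiOlsiMTcyLjE4LjAuMjo5MjAwIl0sImZnciI6IjlmZmNkMTQxYjc4MjZmYjMxYmYzZGM3ODkwN2IxOWE4NTg1ZTZlYjc1MTc0ZDM2OGNjNDgwZmM0YzZkODMzNzMiLCJrZXkiOiJXYm94SUl3Qm52Y3Z5UHI5NW1JbTpxV25lMjF0bFFkeWNaNEwtaVFzM01nIn0=' | base64 -d
# → JSON containing "adr":["172.18.0.2:9200"]
```

Decoding shows "adr":["172.18.0.2:9200"], i.e. the in-network address that Kibana is told to enroll against.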

Now, opening localhost:9200 shows the following:

{
  "name" : "DESKTOP-JP",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "XlWLyCIBQHuPunax-5MnXQ",
  "version" : {
    "number" : "7.17.0",
    "build_flavor" : "default",
    "build_type" : "zip",
    "build_hash" : "bee86328705acaa9a6daede7140defd4d9ec56bd",
    "build_date" : "2022-01-28T08:36:04.875279988Z",
    "build_snapshot" : false,
    "lucene_version" : "8.11.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
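One way to confirm whether two different clusters are answering (a sketch I'd suggest; the container name elasticsearch comes from the docker run command above): compare the cluster_uuid seen from the host with the one seen from inside the container.

```shell
# Extract cluster_uuid from an Elasticsearch root-endpoint response.
extract_uuid() {
  grep '"cluster_uuid"' | sed 's/.*: *"\([^"]*\)".*/\1/'
}

# From the host (this response came back without auth, which already hints
# at an unsecured 7.17 install rather than the secured 8.11.1 container):
curl -s http://localhost:9200 | extract_uuid

# From inside the container (-k skips CA verification, -u prompts for the
# elastic password):
docker exec elasticsearch curl -sk -u elastic https://localhost:9200 | extract_uuid
```

If the two UUIDs differ, two separate clusters are responding, which is exactly the symptom in this thread.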

But when I run the following in Kibana Dev Tools:

GET /

the result is different:

{
  "name": "ac2076a2c0c4",
  "cluster_name": "docker-cluster",
  "cluster_uuid": "Skx_ME9hSNiruW67nzAjkg",
  "version": {
    "number": "8.11.1",
    "build_flavor": "default",
    "build_type": "docker",
    "build_hash": "6f9ff581fbcde658e6f69d6ce03050f060d1fd0c",
    "build_date": "2023-11-11T10:05:59.421038163Z",
    "build_snapshot": false,
    "lucene_version": "9.8.0",
    "minimum_wire_compatibility_version": "7.17.0",
    "minimum_index_compatibility_version": "7.0.0"
  },
  "tagline": "You Know, for Search"
}

When accessing the API at localhost from the host machine where the containers run, the cluster name is elasticsearch.

How can I get Kibana to connect to the same Elasticsearch node as the API?

You didn't explain how you set up your kibana.yml, but since Kibana doesn't connect to the cluster you want, it must contain some invalid information.

The instructions I recommend use docker-compose for the Elasticsearch nodes and Kibana.

Read the docs on running Elasticsearch with Docker Compose, and the docs on running Kibana with Docker Compose.

Having all the services declared in a docker-compose.yml file will simplify the network configuration.

Thank you very much, Tim.

Install Elasticsearch with Docker | Elasticsearch Guide [8.11] | Elastic

But I still have the same issue. The API connects to the same cluster as in:

{
  "name" : "DESKTOP-JP",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "XlWLyCIBQHuPunax-5MnXQ",
  "version" : {
    "number" : "7.17.0",
    "build_flavor" : "default",
    "build_type" : "zip",
    "build_hash" : "bee86328705acaa9a6daede7140defd4d9ec56bd",
    "build_date" : "2022-01-28T08:36:04.875279988Z",
    "build_snapshot" : false,
    "lucene_version" : "8.11.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

but Kibana gets to:

{
  "name": "es01",
  "cluster_name": "elasticsearch",
  "cluster_uuid": "HlLS1FcWTIC06vRD3dfrdw",
  "version": {
    "number": "8.11.1",
    "build_flavor": "default",
    "build_type": "docker",
    "build_hash": "6f9ff581fbcde658e6f69d6ce03050f060d1fd0c",
    "build_date": "2023-11-11T10:05:59.421038163Z",
    "build_snapshot": false,
    "lucene_version": "9.8.0",
    "minimum_wire_compatibility_version": "7.17.0",
    "minimum_index_compatibility_version": "7.0.0"
  },
  "tagline": "You Know, for Search"
}

And of course, I'm not seeing the same indices...

The .env file is:

# Password for the 'elastic' user (at least 6 characters)
ELASTIC_PASSWORD=sad8f98sa79gs

# Password for the 'kibana_system' user (at least 6 characters)
KIBANA_PASSWORD=sad8f98sa79gs

# Version of Elastic products
STACK_VERSION=8.11.1

# Set the cluster name
CLUSTER_NAME=elasticsearch

# Set to 'basic' or 'trial' to automatically start the 30-day trial
LICENSE=basic
#LICENSE=trial

# Port to expose Elasticsearch HTTP API to the host
ES_PORT=9200
#ES_PORT=127.0.0.1:9200

# Port to expose Kibana to the host
KIBANA_PORT=5601
#KIBANA_PORT=80

# Increase or decrease based on the available host memory (in bytes)
MEM_LIMIT=1073741824

# Project namespace (defaults to the current folder name if not set)
#COMPOSE_PROJECT_NAME=docker-compose-jp

The docker-compose.yml is:

version: "2.2"

services:
  setup:
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
    user: "0"
    command: >
      bash -c '
        if [ x${ELASTIC_PASSWORD} == x ]; then
          echo "Set the ELASTIC_PASSWORD environment variable in the .env file";
          exit 1;
        elif [ x${KIBANA_PASSWORD} == x ]; then
          echo "Set the KIBANA_PASSWORD environment variable in the .env file";
          exit 1;
        fi;
        if [ ! -f config/certs/ca.zip ]; then
          echo "Creating CA";
          bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
          unzip config/certs/ca.zip -d config/certs;
        fi;
        if [ ! -f config/certs/certs.zip ]; then
          echo "Creating certs";
          echo -ne \
          "instances:\n"\
          "  - name: es01\n"\
          "    dns:\n"\
          "      - es01\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          "  - name: es02\n"\
          "    dns:\n"\
          "      - es02\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          "  - name: es03\n"\
          "    dns:\n"\
          "      - es03\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          > config/certs/instances.yml;
          bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
          unzip config/certs/certs.zip -d config/certs;
        fi;
        echo "Setting file permissions"
        chown -R root:root config/certs;
        find . -type d -exec chmod 750 \{\} \;;
        find . -type f -exec chmod 640 \{\} \;;
        echo "Waiting for Elasticsearch availability";
        until curl -s --cacert config/certs/ca/ca.crt https://es01:9200 | grep -q "missing authentication credentials"; do sleep 30; done;
        echo "Setting kibana_system password";
        until curl -s -X POST --cacert config/certs/ca/ca.crt -u "elastic:${ELASTIC_PASSWORD}" -H "Content-Type: application/json" https://es01:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done;
        echo "All done!";
      '
    healthcheck:
      test: ["CMD-SHELL", "[ -f config/certs/es01/es01.crt ]"]
      interval: 1s
      timeout: 5s
      retries: 120

  es01:
    depends_on:
      setup:
        condition: service_healthy
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - ${ES_PORT}:9200
    environment:
      - node.name=es01
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es02,es03
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es01/es01.key
      - xpack.security.http.ssl.certificate=certs/es01/es01.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es01/es01.key
      - xpack.security.transport.ssl.certificate=certs/es01/es01.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  es02:
    depends_on:
      - es01
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - esdata02:/usr/share/elasticsearch/data
    environment:
      - node.name=es02
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es01,es03
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es02/es02.key
      - xpack.security.http.ssl.certificate=certs/es02/es02.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es02/es02.key
      - xpack.security.transport.ssl.certificate=certs/es02/es02.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  es03:
    depends_on:
      - es02
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - esdata03:/usr/share/elasticsearch/data
    environment:
      - node.name=es03
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es01,es02
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es03/es03.key
      - xpack.security.http.ssl.certificate=certs/es03/es03.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es03/es03.key
      - xpack.security.transport.ssl.certificate=certs/es03/es03.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  kibana:
    depends_on:
      es01:
        condition: service_healthy
      es02:
        condition: service_healthy
      es03:
        condition: service_healthy
    image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
    volumes:
      - certs:/usr/share/kibana/config/certs
      - kibanadata:/usr/share/kibana/data
    ports:
      - ${KIBANA_PORT}:5601
    environment:
      - SERVERNAME=kibana
      - ELASTICSEARCH_HOSTS=https://es01:9200
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
      - ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
    mem_limit: ${MEM_LIMIT}
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

volumes:
  certs:
    driver: local
  esdata01:
    driver: local
  esdata02:
    driver: local
  esdata03:
    driver: local
  kibanadata:
    driver: local

How can I get Kibana to use the same Elasticsearch cluster?

Hi @Juan_Pablo_Scodelari

I suspect you have 2 Elasticsearch instances running... one in Docker and one on your laptop directly...

Can you show the exact full command you run when you get "name" : "DESKTOP-JP"?

And Where are you running that from?
What OS is your laptop?

I always ask folks to show both the command and the results....
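A quick check along those lines (my sketch, assuming the docker CLI is available on the host): see whether a container actually owns port 9200, and what build type answers on it.

```shell
# Which containers, if any, publish port 9200?
docker ps --filter "publish=9200"

# What answers on the host port? "build_type" : "zip" means a host install
# of Elasticsearch; "build_type" : "docker" means the container.
curl -s http://localhost:9200 | grep '"build_type"'
```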

Thank you @stephenb. You were right. There was an Elasticsearch instance running locally.
I have already removed it.

I went through the whole process again with docker-compose.

I can get to Kibana at localhost:5601/app/dev_tools#/console, even by clicking the port link in Docker Desktop (Windows 11).

But for Elasticsearch there was a trick, as the port link in Docker Desktop doesn't work. I finally got it right at localhost//es01.localhost:9200/

Thank you very much!!


Docker networking can be tricky....

According to your docker compose, you should be able to access it from your laptop with:

curl -k -u elastic https://localhost:9200

But glad it is working.

@stephenb I got it working between Kibana and Elasticsearch in their respective Docker containers.

Now I'm struggling to connect from my .NET app.

When I try to get the indices, I get this error:

Invalid NEST response built from a unsuccessful () low level call on GET: /_cat/indices

My code, after many trials all with the same result, is:

        public static String sha256_hash(String value)
        {
            StringBuilder Sb = new StringBuilder();

            using (SHA256 hash = SHA256Managed.Create())
            {
                Encoding enc = Encoding.UTF8;
                Byte[] result = hash.ComputeHash(enc.GetBytes(value));

                foreach (Byte b in result)
                    Sb.Append(b.ToString("x2"));
            }

            return Sb.ToString();
        }

        private void InitElastic()
        {
            string certificateString = "MIIDQTCCAimgAwIBAgIVALZvBRUf9xVctWvOHM4rbqoUV28KMA0GCSqGSIb3DQEBCwUAMDQxMjAwBgNVBAMTKUVsYXN0aWMgQ2VydGlmaWNhdGUgVG9vbCBBdXRvZ2VuZXJhdGVkIENBMB4XDTIzMTIwMjE1MTYyOFoXDTI2MTIwMTE1MTYyOFowDzENMAsGA1UEAxMEZXMwMTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBANtnJNEKLpk0gpyOcDmyO9xQI0uTpQRamDeDOXaSVyBHywCQpLCX8bvLWYvrl+GahhiWtZPviRtKnRuBUk1TC2qxcLNb+BvB28bGX7vtrP4VKjYKaacbv7BEFVf9x899T5vY/M44GF9XqzsFALj9fOOjE2s8ZaSE81IgnJt/O2MevBM/vLOf5Nz7npH0Xsg80c4pPVGVHOeKb7qDfpoxrxWncLc8Mr0/zVzLXOBv+N9NgfOS6t91u1qB7MfNfyyXuen0eKp8ro7BrVxKMGH35sNteMixlfrdAOlwdimxFVKaVeTqJ5RCJXCgV7kor4jckpJsxDDdl1Mgn8GXw6giNYkCAwEAAaNvMG0wHQYDVR0OBBYEFIN19ozc/68yws64gDKyFEAhg53PMB8GA1UdIwQYMBaAFHVnYs69wVHCIZNiUdFrjBqqlqPOMCAGA1UdEQQZMBeCCWxvY2FsaG9zdIcEfwAAAYIEZXMwMTAJBgNVHRMEAjAAMA0GCSqGSIb3DQEBCwUAA4IBAQCyCTzu7eMbTLs51oYzQt3cmJUD9IagsAUVq02shqwt8o+UkfhINN6+g+4UHsB+1w5NAoO1NFxxyLR2A0bXgKM5fSoR/YTl6zUcuYG6puryjD3Mvr3HCOAOyZYYpKX1fedAfuUQU16p/tlykyRuMOVuEG8uTrsmoNvWo9uyEo2O4fvtXkBet/GefG29S8abomUwFHTzytIcMT2wHA9ArZ+vcs3RD0E/cms9kDGeysMtJaMMwCpdSbdYEhujupX3wXo/VyPBMKx6jfUoDJVwQLbFix+Xe0fTJ4vw9XZ/HigHWjZjmR570VBcl3/D4RICtXqGOaSbhVSOyFzaPTKEDOzL";                    

            IConnectionSettingsValues settings = new ConnectionSettings(new Uri("https://es01:9200"))
               .BasicAuthentication("elastic", "sad8f98sa79gs")
               .EnableApiVersioningHeader()         
               .CertificateFingerprint(sha256_hash(certificateString));
            client = new ElasticClient(settings);

        }

I'm getting the certificate string from:

/usr/share/elasticsearch/config/certs/es01/es01.crt

What am I missing, or what am I doing wrong?

Hi @Juan_Pablo_Scodelari

Typically you should open a new topic for something like this....

So, if you are trying to create a trusted fingerprint, you need to compute it from the CA, not from the node certificate itself:

/usr/share/elasticsearch/config/certs/ca/ca.crt
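A sketch of how that fingerprint can be produced with Compose V2 and openssl (paths from the compose file above). Note the fingerprint is computed over the certificate itself, not over a SHA-256 of its base64 text as in the C# snippet:

```shell
# Copy the CA certificate out of the es01 container (path from the
# compose file) and compute the SHA-256 fingerprint clients expect:
docker compose cp es01:/usr/share/elasticsearch/config/certs/ca/ca.crt .
openssl x509 -in ca.crt -noout -fingerprint -sha256
```

If your client expects a bare hex string, strip the colons from the openssl output before passing it to CertificateFingerprint.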


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.