Domain Name Configuration with Docker on EC2

Hi - I'm testing the deployment of an Enterprise Search stack to an EC2 instance using the docker-compose example outlined here. The stack runs great on the EC2 instance, and I can access everything if I hit the EC2 IP address directly. The problem is when I try to load Kibana via a URL: the response comes back with a mixture of my actual URL:5601 and localhost:5601. It seems like a simple config option I'm missing somewhere.

Also, even some assets that link to mydomain.com:5601 are 404ing. Not sure if that's related.

Attached is what I'm seeing in the network tab and below is my docker-compose.

Thanks :slight_smile:

version: "2.2"

services:
  setup:
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
    user: "0"
    command: >
      bash -c '
        if [ x${ELASTIC_PASSWORD} == x ]; then
          echo "Set the ELASTIC_PASSWORD environment variable in the .env file";
          exit 1;
        elif [ x${KIBANA_PASSWORD} == x ]; then
          echo "Set the KIBANA_PASSWORD environment variable in the .env file";
          exit 1;
        fi;
        if [ ! -f certs/ca.zip ]; then
          echo "Creating CA";
          bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
          unzip config/certs/ca.zip -d config/certs;
        fi;
        if [ ! -f certs/certs.zip ]; then
          echo "Creating certs";
          echo -ne \
          "instances:\n"\
          "  - name: es01\n"\
          "    dns:\n"\
          "      - es01\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          > config/certs/instances.yml;
          bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
          unzip config/certs/certs.zip -d config/certs;
        fi;
        echo "Setting file permissions"
        chown -R root:root config/certs;
        find . -type d -exec chmod 750 \{\} \;;
        find . -type f -exec chmod 640 \{\} \;;
        echo "Waiting for Elasticsearch availability";
        until curl -s --cacert config/certs/ca/ca.crt https://es01:9200 | grep -q "missing authentication credentials"; do sleep 30; done;
        echo "Setting kibana_system password";
        until curl -s -X POST --cacert config/certs/ca/ca.crt -u elastic:${ELASTIC_PASSWORD} -H "Content-Type: application/json" https://es01:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done;
        echo "All done!";
      '
    healthcheck:
      test: ["CMD-SHELL", "[ -f config/certs/es01/es01.crt ]"]
      interval: 1s
      timeout: 5s
      retries: 120

  es01:
    depends_on:
      setup:
        condition: service_healthy
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - ${ES_PORT}:9200
    environment:
      - node.name=es01
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es01/es01.key
      - xpack.security.http.ssl.certificate=certs/es01/es01.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.http.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es01/es01.key
      - xpack.security.transport.ssl.certificate=certs/es01/es01.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=none
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
            "CMD-SHELL",
            "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  kibana:
    depends_on:
      es01:
        condition: service_healthy
    image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
    volumes:
      - certs:/usr/share/kibana/config/certs
      - kibanadata:/usr/share/kibana/data
    ports:
      - ${KIBANA_PORT}:5601
    environment:
      - SERVERNAME=kibana
      - ELASTICSEARCH_HOSTS=https://es01:9200
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
      - ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
      - SERVER_PUBLICBASEURL=https://my-domain.com.com
      - SERVER_HOST=0.0.0.0
      - ENTERPRISESEARCH_HOST=https://my-domain.com.com:${ENTERPRISE_SEARCH_PORT}
      - kibana.external_url=https://my-domain.com.com:5601
      - app_search.external_url=https://my-domain.com.com:9200
      - ent_search.external_url=https://my-domain.com.com:3002
    mem_limit: ${MEM_LIMIT}
    healthcheck:
      test:
        [
            "CMD-SHELL",
            "curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  enterprisesearch:
    depends_on:
      es01:
        condition: service_healthy
      kibana:
        condition: service_healthy
    image: docker.elastic.co/enterprise-search/enterprise-search:${STACK_VERSION}
    volumes:
      - certs:/usr/share/enterprise-search/config/certs
      - enterprisesearchdata:/usr/share/enterprise-search/config
    ports:
      - ${ENTERPRISE_SEARCH_PORT}:3002
    environment:
      - SERVERNAME=enterprisesearch
      - secret_management.encryption_keys=[${ENCRYPTION_KEYS}]
      - allow_es_settings_modification=true
      - elasticsearch.host=https://es01:9200
      - elasticsearch.username=elastic
      - elasticsearch.password=${ELASTIC_PASSWORD}
      - elasticsearch.ssl.enabled=true
      - elasticsearch.ssl.certificate_authority=/usr/share/enterprise-search/config/certs/ca/ca.crt
      - kibana.external_url=https://my-domain.com.com:5601
      - app_search.external_url=https://my-domain.com.com:9200
      - ent_search.external_url=https://my-domain.com.com:3002
      - ent_search.listen_host=0.0.0.0
    mem_limit: ${MEM_LIMIT}
    healthcheck:
      test:
        [
            "CMD-SHELL",
            "curl -s -I http://localhost:3002 | grep -q 'HTTP/1.1 302 Found'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

volumes:
  certs:
    driver: local
  enterprisesearchdata:
    driver: local
  esdata01:
    driver: local
  kibanadata:
    driver: local

@tchocky here is an example of a docker-compose file to start Enterprise Search

Hi @Diana_Jourdan - Thanks for the reply. The example you linked to looks the same as the example I linked to. Any ideas on what config options I'm missing to support a fully qualified domain name and not just localhost?

Thanks.

@tchocky you need to make sure that your ent_search.external_url for Enterprise Search and SERVER_PUBLICBASEURL for Kibana are referencing your FQDN, not localhost or the private IP. You'll also need to make sure that your docker compose is set up to allow the host machine to bind its 9200, 5601, and 3002 ports to those ports in the relevant containers for Elasticsearch, Kibana, and Enterprise Search, respectively.
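For concreteness, here's a sketch of the settings in question with a placeholder FQDN (`my-domain.com` here is just a stand-in for your real domain, and the ports assume the defaults from the example compose file). Note that `SERVER_PUBLICBASEURL` should be the full URL browsers use to reach Kibana, including the port when it isn't the default for the scheme:

```yaml
# Kibana service - placeholder FQDN, adjust to your real domain and ports
kibana:
  environment:
    - SERVER_PUBLICBASEURL=https://my-domain.com:5601

# Enterprise Search service
enterprisesearch:
  environment:
    - kibana.external_url=https://my-domain.com:5601
    - ent_search.external_url=https://my-domain.com:3002
```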

If you find that this is too technically difficult to navigate, I suggest you look into using Elastic Cloud, which will abstract away all the deployment concerns, and let you just focus on using the tools.

@Sean_Story - as you can see in the docker-compose I included in my original post, those settings are already set to my FQDN. My ports are also properly bound. If I hit the EC2 IP address directly, everything works. Any other ideas are greatly appreciated. Thanks.

@tchocky I can see from your original post that you've (rightfully) replaced your FQDN with something generic, which means there's plenty of chance that your actual file has a typo, or something other than an FQDN, in it. That's why I suggested that you ensure these are all set correctly - I can't verify them from the modified snippet you shared.
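One low-tech way to audit this is to pull every URL-bearing setting out of the compose file in one place, where a stray typo (like a doubled `.com`) tends to jump out. A sketch; the inline sample file below is only for illustration, so point `COMPOSE_FILE` at your real docker-compose.yml instead:

```shell
# Write a small sample compose file to grep against; in practice, set
# COMPOSE_FILE to the path of your real docker-compose.yml.
COMPOSE_FILE=${COMPOSE_FILE:-/tmp/sample-compose.yml}
cat > "$COMPOSE_FILE" <<'EOF'
services:
  kibana:
    environment:
      - SERVER_PUBLICBASEURL=https://my-domain.com.com
  enterprisesearch:
    environment:
      - ent_search.external_url=https://my-domain.com:3002
EOF

# List every line that carries a public URL so inconsistencies are
# visible side by side.
grep -nE 'SERVER_PUBLICBASEURL|ENTERPRISESEARCH_HOST|external_url' "$COMPOSE_FILE"
```

Any line whose host differs from the others (or carries an extra `.com`) is a candidate for the mixed-URL behavior you're seeing.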

If I hit the EC2 IP address directly everything works

Does Kibana's UI load if you use the EC2 IP address instead of the FQDN? If so, that makes me think you've exposed the IP but not the FQDN as the SERVER_PUBLICBASEURL.
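One way to see which base URL Kibana actually advertises is the `Location` header on its unauthenticated redirect (the same 302 your healthcheck greps for). A sketch; the FQDN and the sample header below are assumptions, and on the real host you'd capture the header with `curl -s -I http://my-domain.com:5601/`:

```shell
# Sample of a redirect from a misconfigured instance; on the real host this
# would come from: headers=$(curl -s -I "http://my-domain.com:5601/")
headers='HTTP/1.1 302 Found
location: http://localhost:5601/spaces/enter'

# Extract the advertised host. "localhost" here would mean the container
# never picked up SERVER_PUBLICBASEURL.
echo "$headers" | awk -F'[/:]+' 'tolower($1)=="location" {print $3}'
```

If that prints `localhost` rather than your domain, you can check what the running container actually received with `docker-compose exec kibana env | grep SERVER_PUBLICBASEURL`.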

Are you using something like nginx to manage your network for this host? Perhaps something is awry there?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.