Kibana 8.11.0 Failed To Start (Exit Code 1)

Exit code 137 means the container was killed because it ran out of memory (OOM).

That can happen after a while, too... so the containers may start, run for a bit, and then crash...

So, silly question: did you give Docker Desktop enough memory? At least 4 GB, and if you add Logstash etc. you may need more, maybe 6 or 8 GB... If Docker runs out of memory it will crash the containers; if Elasticsearch runs out of memory it will crash.
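One quick way to check what the Docker VM actually has, and what each container is actually using, is something like this (just a sketch):

docker info --format '{{.MemTotal}}'   # total memory available to Docker, in bytes
docker stats --no-stream               # per-container memory usage vs. its limit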

Apologies that it has been frustrating; we are just trying to help... :slight_smile:

I actually already give 7.53 GB to the entire Docker environment.

How much higher do I need to go? I have never seen the usage cross 7.53 GB. The highest I have seen is 7.05 GB.

A couple of questions:

What did you set the elasticsearch memory to in the .env file?

# 1GB
MEM_LIMIT=1073741824

Can you show this view?

Settings

Here is a potential problem: you added Logstash to the compose, and it may claim its memory before all the elasticsearch containers get to claim theirs... since it is not waiting for elasticsearch to start (that would have been a good dependency to add).

I would take out logstash for now... test and then proceed.

I would probably run logstash separately / not in the same compose anyway... you will be starting and stopping it a lot at first, so there is no reason to start and stop the whole cluster every time.
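A rough sketch of what that could look like (file and service names are just examples, and it assumes the same .env is available so the variables resolve): a second compose file whose default network joins the elastic network the main compose already creates, so Logstash can still reach es01 by name.

# docker-compose.logstash.yml (example name)
version: "3.8"

networks:
  default:
    name: elastic      # join the network created by the main compose
    external: true     # do not create it; it must already exist

services:
  logstash:
    image: docker.elastic.co/logstash/logstash:${STACK_VERSION}
    volumes:
      - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf:ro
      # if your pipeline verifies TLS, also mount a copy of the CA cert here
    environment:
      - ELASTIC_USER=elastic
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - ELASTIC_HOSTS=https://es01:9200
    command: logstash -f /usr/share/logstash/pipeline/logstash.conf
    mem_limit: ${LS_MEM_LIMIT}

Then you can start / stop just Logstash with docker-compose -f docker-compose.logstash.yml up -d (and down) without touching the Elasticsearch / Kibana compose.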

Oh, I see you parameterized the Logstash memory; what is that set to?

mem_limit: ${LS_MEM_LIMIT}

Sorry, I don't see anything on Memory Limit in v.425.0

.env

# Password for the 'elastic' user (at least 6 characters)
ELASTIC_PASSWORD=elastic123

# Password for the 'kibana_system' user (at least 6 characters)
KIBANA_PASSWORD=kibana123

# Version of Elastic products
STACK_VERSION=8.11.0

# Set the cluster name
CLUSTER_NAME=docker-cluster

# Set to 'basic' or 'trial' to automatically start the 30-day trial
LICENSE=basic
#LICENSE=trial

# Port to expose Elasticsearch HTTP API to the host
ES_PORT=9200
#ES_PORT=127.0.0.1:9201

# Port to expose Kibana to the host
KIBANA_PORT=5601
#KIBANA_PORT=80

# Increase or decrease based on the available host memory (in bytes)
ES_MEM_LIMIT=4294967296
KB_MEM_LIMIT=1073741824
LS_MEM_LIMIT=1073741824

# Project namespace (defaults to the current folder name if not set)
#COMPOSE_PROJECT_NAME=myproject

Not sure how to make Logstash a separate container and then connect it back to the Docker environment. You would have to explain more about what you mean and how to do it. I am in unfamiliar territory already.

Now when I run docker-compose up -d, everything comes up together in sequence.

Is separation really necessary? Right now, logstash is only supposed to start after ES and Kibana are running. Ali Younges already implemented queue behaviour to prevent all the containers starting together.

request returned Internal Server Error for API route and version http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.24/containers/126faa711bab0e72e450c97fff77bfc96bcc6a6018c457b740167a5f54a50ebc/json, check if the server supports the requested API version

Hi @Ethan777100

Yup that is your problem...

You gave each elasticsearch node 4 GB of memory... That is not a max; that is what it will require / take / claim, and if it cannot get it, it will crash...

ES_MEM_LIMIT=4294967296

So 3 x 4 GB = 12 GB, which puts you way over your limit of 7.53 GB.

This exactly explains the behavior you are seeing!

If this is your laptop test, I would reduce ES_MEM_LIMIT to 1 GB or 2 GB max; 2 GB will mean the three elasticsearch nodes take 6 GB total... no less.
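For example, in the .env (values are in bytes; 2147483648 = 2 GB, 1073741824 = 1 GB):

# Increase or decrease based on the available host memory (in bytes)
ES_MEM_LIMIT=2147483648   # 2GB per elasticsearch node
KB_MEM_LIMIT=1073741824   # 1GB for Kibana
LS_MEM_LIMIT=1073741824   # 1GB for Logstash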

If this is just for fun, you really don't need 3 nodes. Even if this is important... you are running it on your laptop, so there is not much real redundancy in running multiple nodes... if your laptop / disk crashes, 3 nodes does not really matter.

Also, you can see that on Windows it says:

So you will need to manage the Windows Docker resources through that.
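On the WSL 2 backend that usually means a %UserProfile%\.wslconfig file; it does not exist by default, you create it yourself. A minimal sketch, with illustrative values:

[wsl2]
memory=8GB      # cap on the memory the WSL 2 VM (and therefore Docker) can use
processors=4    # cap on the number of virtual CPUs

After saving it, run wsl --shutdown (with Docker Desktop stopped) and start Docker Desktop again so the new limits take effect.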

Just as a side note, I constantly run Elasticsearch + Kibana on my laptop.
I typically run 1 larger node, like 2 GB or 4 GB, rather than 3 smaller nodes.
Your cluster will complain about being "yellow" (no replicas), but as I just explained, on a single machine / laptop that does not really matter.

I start / stop / recreate all the time... sometimes I run the same clusters for weeks or longer. Docker + Elasticsearch is very powerful and convenient once you get used to it.

I always have a way to re-create / reload the data if needed... but that said, you can run Elasticsearch for long periods with Docker on your laptop.


To get to your %UserProfile% directory, in PowerShell, use cd ~ to access your home directory (which is typically your user profile, C:\Users\<UserName>) or you can open Windows File Explorer and enter %UserProfile% in the address bar. The path should look something like: C:\Users\<UserName>\.wslconfig.

I can't even find my .wslconfig in the aforementioned directory path. This is what I mean. Instructions ask you to find something on your machine, but you cannot even find it, and you are stuck on how to account for the said file...

Even if you say "create your own", I don't even know how to start or where to find a template to kick off from. I don't know what I don't know.

This is why I curse the ambiguity of online computer documentation so much. Everything is either assumed or implied.

Managed to reduce my ES containers to just 1.

2023-11-13 00:38:53 [2023-11-12T16:38:53.789+00:00][INFO ][plugins.alerting] Registering resources for context "stack".
2023-11-13 00:38:53 [2023-11-12T16:38:53.846+00:00][WARN ][plugins.reporting.config] Generating a random key for xpack.reporting.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.reporting.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
2023-11-13 00:38:53 [2023-11-12T16:38:53.851+00:00][WARN ][plugins.reporting.config] Found 'server.host: "0.0.0.0"' in Kibana configuration. Reporting is not able to use this as the Kibana server hostname. To enable PNG/PDF Reporting to work, 'xpack.reporting.kibanaServer.hostname: localhost' is automatically set in the configuration. You can prevent this message by adding 'xpack.reporting.kibanaServer.hostname: localhost' in kibana.yml.
2023-11-13 00:38:53 [2023-11-12T16:38:53.978+00:00][INFO ][plugins.cloudSecurityPosture] Registered task successfully [Task: cloud_security_posture-stats_task]
2023-11-13 00:38:54 [2023-11-12T16:38:54.079+00:00][INFO ][plugins.alerting] Registering resources for context "observability.slo".
2023-11-13 00:38:54 [2023-11-12T16:38:54.086+00:00][INFO ][plugins.alerting] Registering resources for context "observability.threshold".
2023-11-13 00:38:54 [2023-11-12T16:38:54.130+00:00][INFO ][plugins.alerting] Registering resources for context "ml.anomaly-detection".
2023-11-13 00:38:54 [2023-11-12T16:38:54.164+00:00][INFO ][plugins.alerting] Registering resources for context "observability.uptime".
2023-11-13 00:38:54 [2023-11-12T16:38:54.228+00:00][INFO ][plugins.alerting] Registering resources for context "observability.logs".
2023-11-13 00:38:54 [2023-11-12T16:38:54.234+00:00][INFO ][plugins.alerting] Registering resources for context "observability.metrics".
2023-11-13 00:38:54 [2023-11-12T16:38:54.472+00:00][INFO ][plugins.alerting] Registering resources for context "security".
2023-11-13 00:38:54 [2023-11-12T16:38:54.556+00:00][INFO ][plugins.assetManager] Server is NOT enabled
2023-11-13 00:38:54 [2023-11-12T16:38:54.573+00:00][INFO ][plugins.alerting] Registering resources for context "observability.apm".
2023-11-13 00:38:54 [2023-11-12T16:38:54.881+00:00][WARN ][plugins.screenshotting.config] Chromium sandbox provides an additional layer of protection, but is not supported for Linux Ubuntu 20.04 OS. Automatically setting 'xpack.screenshotting.browser.chromium.disableSandbox: true'.
2023-11-13 00:38:55 [2023-11-12T16:38:55.022+00:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. security_exception
2023-11-13 00:38:55     Root causes:
2023-11-13 00:38:55             security_exception: unable to authenticate user [kibana_system] for REST request [/_nodes?filter_path=nodes.*.version%2Cnodes.*.http.publish_address%2Cnodes.*.ip]
2023-11-13 00:38:57 [2023-11-12T16:38:57.119+00:00][INFO ][plugins.screenshotting.chromium] Browser executable: /usr/share/kibana/node_modules/@kbn/screenshotting-plugin/chromium/headless_shell-linux_x64/headless_shell

Now the Kibana container is unhealthy: "Kibana server is not ready yet."

Because of this, my logstash container auto crashed.

2023-11-13 00:37:31 Using bundled JDK: /usr/share/logstash/jdk
2023-11-13 00:38:20 Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
2023-11-13 00:38:21 [2023-11-12T16:38:21,044][WARN ][deprecation.logstash.runner] NOTICE: Running Logstash as superuser is not recommended and won't be allowed in the future. Set 'allow_superuser' to 'false' to avoid startup errors in future releases.
2023-11-13 00:38:21 [2023-11-12T16:38:21,099][INFO ][logstash.runner          ] Log4j configuration path used is: /usr/share/logstash/config/log4j2.properties
2023-11-13 00:38:21 [2023-11-12T16:38:21,102][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"8.11.0", "jruby.version"=>"jruby 9.4.2.0 (3.1.0) 2023-03-08 90d2913fda OpenJDK 64-Bit Server VM 17.0.9+9 on 17.0.9+9 +indy +jit [x86_64-linux]"}
2023-11-13 00:38:21 [2023-11-12T16:38:21,109][INFO ][logstash.runner          ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Dls.cgroup.cpuacct.path.override=/, -Dls.cgroup.cpu.path.override=/, -Djruby.regexp.interruptible=true, -Djdk.io.File.enableADS=true, --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED]
2023-11-13 00:38:21 [2023-11-12T16:38:21,706][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
2023-11-13 00:38:23 [2023-11-12T16:38:23,641][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
2023-11-13 00:38:24 [2023-11-12T16:38:24,252][ERROR][logstash.agent           ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of [^\\r\\n], \"\\r\", \"\\n\" at line 38, column 4 (byte 739) after # }", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:32:in `compile_imperative'", "org/logstash/execution/AbstractPipelineExt.java:239:in `initialize'", "org/logstash/execution/AbstractPipelineExt.java:173:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:48:in `initialize'", "org/jruby/RubyClass.java:931:in `new'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:49:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:386:in `block in converge_state'"]}
2023-11-13 00:38:24 [2023-11-12T16:38:24,289][INFO ][logstash.runner          ] Logstash shut down.
2023-11-13 00:38:24 [2023-11-12T16:38:24,299][FATAL][org.logstash.Logstash    ] Logstash stopped processing because of an error: (SystemExit) exit
2023-11-13 00:38:24 org.jruby.exceptions.SystemExit: (SystemExit) exit
2023-11-13 00:38:24     at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:795) ~[jruby.jar:?]
2023-11-13 00:38:24     at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:758) ~[jruby.jar:?]
2023-11-13 00:38:24     at usr.share.logstash.lib.bootstrap.environment.<main>(/usr/share/logstash/lib/bootstrap/environment.rb:90) ~[?:?]
2023-11-13 00:37:31 2023/11/12 16:37:31 Setting 'xpack.monitoring.enabled' from environment.
2023-11-13 00:37:31 2023/11/12 16:37:31 Setting 'node.name' from environment.

Sorry, but that is Docker + Windows documentation, not Elastic's... The same goes if you are running Redis, Kafka, etc... We can't really document every other tech you may use... You are not required to use Docker for elasticsearch... anyway, this is the modern tech world we live in.

Perhaps you should share what you did? Did you edit it out, clean up the volumes and start from scratch? Can you share your new compose?

Again, my suggestion is to get each one working and then add the next (i.e. take out logstash for now).

And my suggestion would be to run logstash separately until you get it operating as you like...

These are just suggestions... but they may save you some frustration...

version: "3.8"

volumes:
  certs:
    driver: local
  esdata01:
    driver: local
  esdata02:
    driver: local
  esdata03:
    driver: local
  kibanadata:
    driver: local
  logstashdata01:
    driver: local

networks:
  default:
    name: elastic
    external: false
    
services:
  setup:
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
    user: "0"
    command: >
      bash -c '
        if [ x${ELASTIC_PASSWORD} == x ]; then
          echo "Set the ELASTIC_PASSWORD environment variable in the .env file";
          exit 1;
        elif [ x${KIBANA_PASSWORD} == x ]; then
          echo "Set the KIBANA_PASSWORD environment variable in the .env file";
          exit 1;
        fi;
        if [ ! -f config/certs/ca.zip ]; then
          echo "Creating CA";
          bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
          unzip config/certs/ca.zip -d config/certs;
        fi;
        if [ ! -f config/certs/certs.zip ]; then
          echo "Creating certs";
          echo -ne \
          "instances:\n"\
          "  - name: es01\n"\
          "    dns:\n"\
          "      - es01\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          "  - name: es02\n"\
          "    dns:\n"\
          "      - es02\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          "  - name: es03\n"\
          "    dns:\n"\
          "      - es03\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          "  - name: kibana\n"\
          "    dns:\n"\
          "      - kibana\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          > config/certs/instances.yml;
          bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
          unzip config/certs/certs.zip -d config/certs;
        fi;
        echo "Setting file permissions"
        chown -R root:root config/certs;
        find . -type d -exec chmod 750 \{\} \;;
        find . -type f -exec chmod 640 \{\} \;;
        echo "Waiting for Elasticsearch availability";
        until curl -s --cacert config/certs/ca/ca.crt https://es01:9200 | grep -q "missing authentication credentials"; do sleep 30; done;
        echo "Setting kibana_system password";
        until curl -s -X POST --cacert config/certs/ca/ca.crt -u "elastic:${ELASTIC_PASSWORD}" -H "Content-Type: application/json" https://es01:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done;
        echo "All done!";
      '
    healthcheck:
      test: ["CMD-SHELL", "[ -f config/certs/es01/es01.crt ]"]
      interval: 1s
      timeout: 5s
      retries: 120

  es01:
    depends_on:
      setup:
        condition: service_healthy
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    labels:
      co.elastic.logs/module: elasticsearch
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - ${ES_PORT}:9200
    environment:
      - node.name=es01
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01
      - discovery.seed_hosts=es02,es03
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es01/es01.key
      - xpack.security.http.ssl.certificate=certs/es01/es01.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es01/es01.key
      - xpack.security.transport.ssl.certificate=certs/es01/es01.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${ES_MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  # es02:
  #   depends_on:
  #     - es01
  #   image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
  #   labels:
  #     co.elastic.logs/module: elasticsearch
  #   volumes:
  #     - certs:/usr/share/elasticsearch/config/certs
  #     - esdata02:/usr/share/elasticsearch/data
  #   environment:
  #     - node.name=es02
  #     - cluster.name=${CLUSTER_NAME}
  #     - cluster.initial_master_nodes=es01
  #     - discovery.seed_hosts=es01,es03
  #     - bootstrap.memory_lock=true
  #     - xpack.security.enabled=true
  #     - xpack.security.http.ssl.enabled=true
  #     - xpack.security.http.ssl.key=certs/es02/es02.key
  #     - xpack.security.http.ssl.certificate=certs/es02/es02.crt
  #     - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
  #     - xpack.security.transport.ssl.enabled=true
  #     - xpack.security.transport.ssl.key=certs/es02/es02.key
  #     - xpack.security.transport.ssl.certificate=certs/es02/es02.crt
  #     - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
  #     - xpack.security.transport.ssl.verification_mode=certificate
  #     - xpack.license.self_generated.type=${LICENSE}
  #   mem_limit: ${ES_MEM_LIMIT}
  #   ulimits:
  #     memlock:
  #       soft: -1
  #       hard: -1
  #   healthcheck:
  #     test:
  #       [
  #         "CMD-SHELL",
  #         "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
  #       ]
  #     interval: 10s
  #     timeout: 10s
  #     retries: 120

  # es03:
  #   depends_on:
  #     - es02
  #   image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
  #   labels:
  #     co.elastic.logs/module: elasticsearch
  #   volumes:
  #     - certs:/usr/share/elasticsearch/config/certs
  #     - esdata03:/usr/share/elasticsearch/data
  #   environment:
  #     - node.name=es03
  #     - cluster.name=${CLUSTER_NAME}
  #     - cluster.initial_master_nodes=es01
  #     - discovery.seed_hosts=es01,es02
  #     - bootstrap.memory_lock=true
  #     - xpack.security.enabled=true
  #     - xpack.security.http.ssl.enabled=true
  #     - xpack.security.http.ssl.key=certs/es03/es03.key
  #     - xpack.security.http.ssl.certificate=certs/es03/es03.crt
  #     - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
  #     - xpack.security.transport.ssl.enabled=true
  #     - xpack.security.transport.ssl.key=certs/es03/es03.key
  #     - xpack.security.transport.ssl.certificate=certs/es03/es03.crt
  #     - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
  #     - xpack.security.transport.ssl.verification_mode=certificate
  #     - xpack.license.self_generated.type=${LICENSE}
  #   mem_limit: ${ES_MEM_LIMIT}
  #   ulimits:
  #     memlock:
  #       soft: -1
  #       hard: -1
  #   healthcheck:
  #     test:
  #       [
  #         "CMD-SHELL",
  #         "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
  #       ]
  #     interval: 10s
  #     timeout: 10s
  #     retries: 120

  kibana:
    depends_on:
      es01:
        condition: service_healthy
    image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
    labels:
      co.elastic.logs/module: kibana
    volumes:
      - certs:/usr/share/kibana/config/certs
      - kibanadata:/usr/share/kibana/data
    ports:
      - ${KIBANA_PORT}:5601
    environment:
      - SERVERNAME=kibana
      - ELASTICSEARCH_HOSTS=https://es01:9200
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
      - ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
    mem_limit: ${KB_MEM_LIMIT}
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
        ]
      interval: 5s
      timeout: 10s
      retries: 10

  logstash:
    depends_on: 
      es01:
        condition: service_healthy
    image: docker.elastic.co/logstash/logstash:${STACK_VERSION}
    labels:
      co.elastic.logs/module: logstash
    user: root
    volumes:
      - logstashdata01:/usr/share/logstash/data
      - certs:/usr/share/logstash/certs
      - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf:ro
    environment:
      - NODE_NAME="logstash"
      - xpack.monitoring.enabled=false
      - ELASTIC_USER=elastic
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - ELASTIC_HOSTS=https://es01:9200
    command: logstash -f /usr/share/logstash/pipeline/logstash.conf
    ports:
      - "5044:5044/udp"
    mem_limit: ${LS_MEM_LIMIT}

@Ethan777100

Here is my working single-node compose based on yours.
I cleaned up some configs that are no longer correct (I put a few comments in).
I cleared the volumes.
Took out logstash (there are most likely issues there).
Elasticsearch and Kibana start up and are stable.

version: "3.8"

volumes:
  certs:
    driver: local
  esdata01:
    driver: local
  kibanadata:
    driver: local
  logstashdata01:
    driver: local

networks:
  default:
    name: elastic
    external: false
    
services:
  setup:
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
    user: "0"
    command: >
      bash -c '
        if [ x${ELASTIC_PASSWORD} == x ]; then
          echo "Set the ELASTIC_PASSWORD environment variable in the .env file";
          exit 1;
        elif [ x${KIBANA_PASSWORD} == x ]; then
          echo "Set the KIBANA_PASSWORD environment variable in the .env file";
          exit 1;
        fi;
        if [ ! -f config/certs/ca.zip ]; then
          echo "Creating CA";
          bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
          unzip config/certs/ca.zip -d config/certs;
        fi;
        if [ ! -f config/certs/certs.zip ]; then
          echo "Creating certs";
          echo -ne \
          "instances:\n"\
          "  - name: es01\n"\
          "    dns:\n"\
          "      - es01\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          > config/certs/instances.yml;
          bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
          unzip config/certs/certs.zip -d config/certs;
        fi;
        echo "Setting file permissions"
        chown -R root:root config/certs;
        find . -type d -exec chmod 750 \{\} \;;
        find . -type f -exec chmod 640 \{\} \;;
        echo "Waiting for Elasticsearch availability";
        until curl -s --cacert config/certs/ca/ca.crt https://es01:9200 | grep -q "missing authentication credentials"; do sleep 30; done;
        echo "Setting kibana_system password";
        until curl -s -X POST --cacert config/certs/ca/ca.crt -u "elastic:${ELASTIC_PASSWORD}" -H "Content-Type: application/json" https://es01:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done;
        echo "All done!";
      '
    healthcheck:
      test: ["CMD-SHELL", "[ -f config/certs/es01/es01.crt ]"]
      interval: 1s
      timeout: 5s
      retries: 120

  es01:
    depends_on:
      setup:
        condition: service_healthy
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    labels:
      co.elastic.logs/module: elasticsearch
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - ${ES_PORT}:9200
    environment:
      - node.name=es01
      - cluster.name=${CLUSTER_NAME}
      # Better for Single Node
      - discovery.type=single-node
      #- cluster.initial_master_nodes=es01
      # - discovery.seed_hosts=es02,es03 # These nodes no longer exist 
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es01/es01.key
      - xpack.security.http.ssl.certificate=certs/es01/es01.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es01/es01.key
      - xpack.security.transport.ssl.certificate=certs/es01/es01.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${ES_MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  kibana:
    depends_on:
      es01:
        condition: service_healthy
    image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
    labels:
      co.elastic.logs/module: kibana
    volumes:
      - certs:/usr/share/kibana/config/certs
      - kibanadata:/usr/share/kibana/data
    ports:
      - ${KIBANA_PORT}:5601
    environment:
      - SERVERNAME=kibana
      - ELASTICSEARCH_HOSTS=https://es01:9200
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
      - ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
    mem_limit: ${KB_MEM_LIMIT}
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
        ]
      interval: 5s
      timeout: 10s
      retries: 10

  # logstash:
  #   depends_on: 
  #     es01:
  #       condition: service_healthy
  #   image: docker.elastic.co/logstash/logstash:${STACK_VERSION}
  #   labels:
  #     co.elastic.logs/module: logstash
  #   user: root
  #   volumes:
  #     - logstashdata01:/usr/share/logstash/data
  #     - certs:/usr/share/logstash/certs
  #     - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf:ro
  #   environment:
  #     - NODE_NAME="logstash"
  #     - xpack.monitoring.enabled=false
  #     - ELASTIC_USER=elastic
  #     - ELASTIC_PASSWORD={ELASTIC_PASSWORD}
  #     - ELASTIC_HOSTS=https://es01:9200
  #   command: logstash -f /usr/share/logstash/pipeline/logstash.conf
  #   ports:
  #     - "5044:5044"
  #   mem_limit: ${LS_MEM_LIMIT}

Again I would suggest running logstash separately...

You have a separate thread on that; perhaps someone will help there...
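For what it's worth, that ConfigurationError in the Logstash log points at a syntax problem in logstash.conf itself (around line 38, after a "# }"). A minimal pipeline matching the paths and credentials used in this compose would look roughly like this (just a sketch; your real inputs / filters will differ):

input {
  # matches the 5044/udp port published in the compose
  udp {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["https://es01:9200"]
    user => "elastic"
    password => "${ELASTIC_PASSWORD}"
    # CA generated by the setup container, mounted at /usr/share/logstash/certs
    # (newer plugin versions call this option ssl_certificate_authorities)
    cacert => "/usr/share/logstash/certs/ca/ca.crt"
    index => "logstash-%{+YYYY.MM.dd}"
  }
}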


This looks like you did not use my new settings....
And / or did not clean out ALL the volume mounts including the data mounts

These are the lines I added / fixed:

      # Better for Single Node
      - discovery.type=single-node
      #- cluster.initial_master_nodes=es01
      # - discovery.seed_hosts=es02,es03 # These nodes no longer exist 

Explicitly tell the single node not to try to form a cluster.

That error message shows that it still thinks it is part of a cluster... and now it is "confused", as the config says single-node but the previous state says it is still part of a cluster...

Clean out ALL the volume mounts and try again.. make sure they are gone before you start again...
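Something like this (a sketch; the exact volume names carry the compose project prefix, so check docker volume ls):

docker-compose down -v   # stop the containers and remove the named volumes declared in the compose
docker volume ls         # confirm the certs / esdata / kibanadata volumes are really gone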

Scratch that. I did a clean wipe of everything

> docker-compose down -v

and restarted while you were replying to me. All looks good.



So if you are going to load a lot of data, you could up the memory to 2 GB; it might help a bit... but you can do that at any time. Just stop the compose, edit the memory limit, then start it again.
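Roughly (a sketch; the change itself is just the .env value):

docker-compose stop
# edit .env: ES_MEM_LIMIT=2147483648   (2 GB)
docker-compose up -d   # recreates es01 with the new mem_limit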

Also, your cluster will often report as "yellow", which simply means there are no replica shards; that is fine on a single laptop.
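You can check that yourself with something like this (a sketch; -k skips CA verification since the generated CA lives inside the containers, and the password is the ELASTIC_PASSWORD from your .env):

curl -sk -u elastic:elastic123 https://localhost:9200/_cluster/health?pretty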


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.