Install Elasticsearch with Docker

Tried it again:

root@dos:/opt/elk1# docker-compose down
root@dos:/opt/elk1# docker-compose up -d
[+] Running 4/4
 āœ” Network elk1_default     Created                                                                                                                                                                        0.1s
 āœ” Container elk1-setup-1   Healthy                                                                                                                                                                        2.1s
 āœ” Container elk1-es01-1    Healthy                                                                                                                                                                       33.0s
 āœ” Container elk1-kibana-1  Started                                                                                                                                                                       33.4s
root@dos:/opt/elk1# curl localhost:9200
curl: (7) Failed to connect to localhost port 9200 after 0 ms: Couldn't connect to server
root@dos:/opt/elk1# docker ps -a | grep elk1-es01-1
b79add8f463e   "/bin/tini -- /usr/lā€¦"   2 minutes ago   Exited (137) 2 minutes ago                                                                                                            elk1-es01-1
root@dos:/opt/elk1# docker logs elk1-es01-1 | tail

ERROR: Elasticsearch exited unexpectedly
{"@timestamp":"2023-09-08T00:11:41.084Z", "log.level": "INFO", "message":"adding index lifecycle policy [.deprecation-indexing-ilm-policy]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[es01][masterService#updateTask][T#1]","log.logger":"org.elasticsearch.xpack.ilm.action.TransportPutLifecycleAction","elasticsearch.cluster.uuid":"uzF2GsAaT3GFpbi-qQTThg","elasticsearch.node.id":"WqcZphI3QdmT_j1E-pjZfg","elasticsearch.node.name":"es01","elasticsearch.cluster.name":"kalei"}
{"@timestamp":"2023-09-08T00:11:41.111Z", "log.level": "INFO", "message":"adding index lifecycle policy [.fleet-files-ilm-policy]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[es01][masterService#updateTask][T#1]","log.logger":"org.elasticsearch.xpack.ilm.action.TransportPutLifecycleAction","elasticsearch.cluster.uuid":"uzF2GsAaT3GFpbi-qQTThg","elasticsearch.node.id":"WqcZphI3QdmT_j1E-pjZfg","elasticsearch.node.name":"es01","elasticsearch.cluster.name":"kalei"}
{"@timestamp":"2023-09-08T00:11:41.138Z", "log.level": "INFO", "message":"adding index lifecycle policy [.fleet-file-data-ilm-policy]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[es01][masterService#updateTask][T#1]","log.logger":"org.elasticsearch.xpack.ilm.action.TransportPutLifecycleAction","elasticsearch.cluster.uuid":"uzF2GsAaT3GFpbi-qQTThg","elasticsearch.node.id":"WqcZphI3QdmT_j1E-pjZfg","elasticsearch.node.name":"es01","elasticsearch.cluster.name":"kalei"}
{"@timestamp":"2023-09-08T00:11:41.166Z", "log.level": "INFO", "message":"adding index lifecycle policy [.fleet-actions-results-ilm-policy]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[es01][masterService#updateTask][T#1]","log.logger":"org.elasticsearch.xpack.ilm.action.TransportPutLifecycleAction","elasticsearch.cluster.uuid":"uzF2GsAaT3GFpbi-qQTThg","elasticsearch.node.id":"WqcZphI3QdmT_j1E-pjZfg","elasticsearch.node.name":"es01","elasticsearch.cluster.name":"kalei"}
{"@timestamp":"2023-09-08T00:11:41.252Z", "log.level": "INFO", "message":"Node [{es01}{WqcZphI3QdmT_j1E-pjZfg}] is selected as the current health node.", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[es01][management][T#2]","log.logger":"","elasticsearch.cluster.uuid":"uzF2GsAaT3GFpbi-qQTThg","elasticsearch.node.id":"WqcZphI3QdmT_j1E-pjZfg","elasticsearch.node.name":"es01","elasticsearch.cluster.name":"kalei"}
{"@timestamp":"2023-09-08T00:11:41.334Z", "log.level": "INFO", "message":"license [5240dede-06ba-44d8-be7c-4a08895a80ea] mode [basic] - valid", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[es01][clusterApplierService#updateTask][T#1]","log.logger":"org.elasticsearch.license.ClusterStateLicenseService","elasticsearch.cluster.uuid":"uzF2GsAaT3GFpbi-qQTThg","elasticsearch.node.id":"WqcZphI3QdmT_j1E-pjZfg","elasticsearch.node.name":"es01","elasticsearch.cluster.name":"kalei"}
{"@timestamp":"2023-09-08T00:11:41.335Z", "log.level": "INFO", "message":"license mode is [basic], currently licensed security realms are [reserved/reserved,file/default_file,native/default_native]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[es01][clusterApplierService#updateTask][T#1]","log.logger":"","elasticsearch.cluster.uuid":"uzF2GsAaT3GFpbi-qQTThg","elasticsearch.node.id":"WqcZphI3QdmT_j1E-pjZfg","elasticsearch.node.name":"es01","elasticsearch.cluster.name":"kalei"}
{"@timestamp":"2023-09-08T00:11:59.607Z", "log.level": "INFO", "message":"security index does not exist, creating [.security-7] with alias [.security]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[es01][transport_worker][T#5]","log.logger":"","elasticsearch.cluster.uuid":"uzF2GsAaT3GFpbi-qQTThg","elasticsearch.node.id":"WqcZphI3QdmT_j1E-pjZfg","elasticsearch.node.name":"es01","elasticsearch.cluster.name":"kalei"}
{"@timestamp":"2023-09-08T00:11:59.671Z", "log.level": "INFO", "message":"[.security-7] creating index, cause [api], templates [], shards [1]/[0]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[es01][masterService#updateTask][T#1]","log.logger":"org.elasticsearch.cluster.metadata.MetadataCreateIndexService","elasticsearch.cluster.uuid":"uzF2GsAaT3GFpbi-qQTThg","elasticsearch.node.id":"WqcZphI3QdmT_j1E-pjZfg","elasticsearch.node.name":"es01","elasticsearch.cluster.name":"kalei"}
{"@timestamp":"2023-09-08T00:11:59.929Z", "log.level": "INFO", "current.health":"GREEN","message":"Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.security-7][0]]]).","previous.health":"YELLOW","reason":"shards started [[.security-7][0]]" , "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[es01][masterService#updateTask][T#1]","log.logger":"org.elasticsearch.cluster.routing.allocation.AllocationService","elasticsearch.cluster.uuid":"uzF2GsAaT3GFpbi-qQTThg","elasticsearch.node.id":"WqcZphI3QdmT_j1E-pjZfg","elasticsearch.node.name":"es01","elasticsearch.cluster.name":"kalei"}

Yes, interesting.

Again, when I see "exited unexpectedly" together with exit code 137, I think OOM (Out of Memory): 137 is 128 + 9, meaning the process was killed with SIGKILL, which is what the kernel OOM killer sends.

Your node / cluster went green before it quit / died.
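To see where that 137 comes from, here is a quick shell demo (no Docker needed) that SIGKILLs a process and checks the resulting exit status:

```shell
# Start a sleeper in the background, SIGKILL it, and observe exit status 137.
sleep 30 &
pid=$!
kill -9 "$pid"
wait "$pid"
echo "exit status: $?"   # prints "exit status: 137" (128 + SIGKILL's signal number 9)
```

On the Docker side, `docker inspect -f '{{.State.OOMKilled}}' elk1-es01-1` should confirm whether the daemon recorded an OOM kill, assuming the exited container still exists.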

So let's try another experiment. Here is a simple compose with no security, no authentication, no SSL... nothing, plus a few other settings. Please try to run it and see what happens.

Clean everything up beforehand, run it, and see what happens.

It starts Elasticsearch and Kibana with no security.

(BTW, I noticed that in the .env the version is 8.9.1, but in your recent logs it is 8.8.1... are you sure you are running what you think you are?)

Run it with this command:
TAG=8.9.1 docker-compose -f es-kb-nosec.yml up
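Note that docker-compose resolves `${TAG}` from the shell environment first and only falls back to .env, which is exactly what makes the 8.9.1-vs-8.8.1 confusion above possible. The underlying mechanism is plain environment-variable inheritance, demonstrated here without Docker (the `/tmp/show-tag.sh` script is just a stand-in for the compose file):

```shell
# A variable set on the command line is visible inside the child process;
# unset, the fallback kicks in -- same precedence idea compose uses.
cat > /tmp/show-tag.sh <<'EOF'
echo "using TAG=${TAG:-unset}"
EOF
TAG=8.9.1 sh /tmp/show-tag.sh    # prints "using TAG=8.9.1"
sh /tmp/show-tag.sh              # prints "using TAG=unset"
```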


version: '3'

services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:${TAG}
    container_name: es01
    # 8.x
    environment: ['CLI_JAVA_OPTS=-Xms1g -Xmx1g','bootstrap.memory_lock=true','discovery.type=single-node','xpack.security.enabled=false','xpack.security.enrollment.enabled=false']
    ports:
      - 9200:9200
    networks:
      - elastic
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    deploy:
      resources:
        limits:
          cpus: '2.0'
        reservations:
          cpus: '1.0'

  kibana:
    image: docker.elastic.co/kibana/kibana:${TAG}
    container_name: kib01
    environment:
      XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY: d1a66dfd-c4d3-4a0a-8290-2abcb83ab3aa
    ports:
      - 5601:5601
    networks:
      - elastic
    deploy:
      resources:
        limits:
          cpus: '2.0'
        reservations:
          cpus: '1.0'

networks:
  elastic:



I'm completely new to the Elastic Stack and still learning, but I was able to stand up an Elasticsearch cluster with 2 ES nodes through Docker images. I also have Kibana, Logstash, Elastic Agent (Fleet Server), and the package registry as containers.

But I didn't go the docker-compose.yml route. I just ran the containers individually, mounted the volumes on my host, then started securing the TLS communications from scratch.

This was the only way that worked for me, and I tried multiple ways. Securing communications between ES nodes, ES to Kibana, ES to Elastic Agent, etc., was just hard to wrap my head around. I know there are still things I need to fix, but for now my Kibana instance is healthy and my ES cluster is healthy. It was also on RHEL 9, which made it more difficult.

I can go more into detail if you want to go that route instead of the docker-compose route.

BTW, this took months to figure out because I was pretty much teaching myself through trial and error, and the Elastic Stack is a beast on its own. So throwing Docker images, YAML files, OS security, etc. on top of that makes it that much harder.

root@dos:/opt/elk# for volume in certs esdata01 esdata02 esdata03 kibanadata ; do docker volume rm elk_$volume ; done

I was trying an older version in case a recent bug had been introduced, but that was not the case, and as of now I have actually updated it to 8.9.2.

root@dos:/opt/elk# grep STACK_VERSION .env
root@dos:/opt/elk# TAG=8.9.2 docker-compose -f es-kb-nosec.yml up -d
[+] Running 3/3
 āœ” Network elk_elastic  Created                                                                                                                                                                                                                                                                     0.1s
 āœ” Container kib01      Started                                                                                                                                                                                                                                                                     0.6s
 āœ” Container es01       Started                                                                                                                                                                                                                                                                     0.6s

A single node always works:

root@dos:/opt/elk# TAG=8.9.2 docker-compose -f es-kb-nosec.yml ps
NAME                IMAGE                                                 COMMAND                  SERVICE             CREATED             STATUS              PORTS
es01                docker.elastic.co/elasticsearch/elasticsearch:8.9.2   "/bin/tini -- /usr/l…"   elasticsearch       3 minutes ago       Up 3 minutes        0.0.0.0:9200->9200/tcp, :::9200->9200/tcp, 9300/tcp
kib01               docker.elastic.co/kibana/kibana:8.9.2                 "/bin/tini -- /usr/l…"   kibana              3 minutes ago       Up 3 minutes        0.0.0.0:5601->5601/tcp, :::5601->5601/tcp
root@dos:/opt/elk# curl -I localhost:9200
HTTP/1.1 200 OK
X-elastic-product: Elasticsearch
content-type: application/json
content-length: 539

root@dos:/opt/elk# curl localhost:9200
{
  "name" : "10069377c907",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "2XbApXIMRoCVDieCUnPKqg",
  "version" : {
    "number" : "8.9.2",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "e8179018838f55b8820685f92e245abef3bddc0f",
    "build_date" : "2023-08-31T02:43:14.210479707Z",
    "build_snapshot" : false,
    "lucene_version" : "9.7.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}

To be clear, that is a single node with no security.
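If you want to script a version check like the curl above instead of eyeballing the JSON, something like this works (sketch; the JSON is inlined here so it runs without a live cluster — against a real node you would pipe `curl -s localhost:9200` into the same grep/cut):

```shell
# Pull the version number out of the cluster-info JSON with plain POSIX tools.
json='{ "version" : { "number" : "8.9.2" } }'
echo "$json" | grep -o '"number" *: *"[^"]*"' | head -1 | cut -d'"' -f4   # prints 8.9.2
```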

No, that is not correct.

If I'm reading correctly above, when you tried a single node with security enabled, it failed on the Ubuntu server.

So that is two different results on the Ubuntu box... And I thought you said it worked on your laptop, so perhaps there is something fundamentally not correct on the Ubuntu box.

So the next test will be multi-node with no security on Ubuntu.

The result of that will tell us much. It'll basically tell us whether it's about the networking, the security, or both.

It's late. I don't have a multi-node no-security compose, but it should be pretty easy to put together... If not, perhaps I can take a look tomorrow.
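A rough sketch of what such a multi-node, no-security compose could look like, following the same `${TAG}` image convention as the single-node file above (the service names, cluster name, and heap sizes here are assumptions, not a tested file):

```yaml
version: '3'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:${TAG}
    environment:
      - node.name=es01
      - cluster.name=demo-cluster
      - discovery.seed_hosts=es02
      - cluster.initial_master_nodes=es01,es02
      - xpack.security.enabled=false
      - CLI_JAVA_OPTS=-Xms1g -Xmx1g
    ports:
      - 9200:9200
    networks:
      - elastic
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:${TAG}
    environment:
      - node.name=es02
      - cluster.name=demo-cluster
      - discovery.seed_hosts=es01
      - cluster.initial_master_nodes=es01,es02
      - xpack.security.enabled=false
      - CLI_JAVA_OPTS=-Xms1g -Xmx1g
    networks:
      - elastic
networks:
  elastic:
```

The key differences from single-node are dropping `discovery.type=single-node` and adding `discovery.seed_hosts` / `cluster.initial_master_nodes` so the nodes can find each other and bootstrap the cluster.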

I'd like to thank you for not giving up and for helping me troubleshoot my environment!

I've got it to work! I doubled MEM_LIMIT in the .env file and all 3 nodes were able to come online and form the cluster :wink:

Oh, I also bumped my GCP instance type from e2-standard-8 to n2-standard-8; the instances have about 32GB and 30GB of memory respectively. I have a few GB free after running the 3 nodes, which will be used by the Linux kernel, cache, etc.
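As a back-of-the-envelope sanity check of why doubling MEM_LIMIT helped (assumptions: a 1 GB heap per node, the common guidance that the container limit be roughly double the heap since Elasticsearch wants the heap at no more than half of available RAM, and ~1 GB for Kibana):

```shell
# Rough memory budget for a 3-node stack; all figures are assumptions.
HEAP_GB=1
NODES=3
MEM_LIMIT_GB=$((HEAP_GB * 2))          # container limit ~= 2x heap
TOTAL_GB=$((NODES * MEM_LIMIT_GB + 1)) # +1 GB for Kibana
echo "stack needs roughly ${TOTAL_GB} GB"   # prints "stack needs roughly 7 GB"
```

Whatever is left over on the host then goes to the kernel, page cache, and so on, as noted above.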

Thanks again!

root@dos:/opt/elk# docker-compose ps
NAME                IMAGE                                                  COMMAND                  SERVICE             CREATED             STATUS                    PORTS
elk-es01-1     "/bin/tini -- /usr/l…"   es01                50 minutes ago      Up 50 minutes (healthy)   0.0.0.0:9200->9200/tcp, :::9200->9200/tcp, 9300/tcp
elk-es02-1     "/bin/tini -- /usr/l…"   es02                50 minutes ago      Up 50 minutes (healthy)   9200/tcp, 9300/tcp
elk-es03-1     "/bin/tini -- /usr/l…"   es03                50 minutes ago      Up 50 minutes (healthy)   9200/tcp, 9300/tcp
elk-kibana-1   "/bin/tini -- /usr/l…"   kibana              50 minutes ago      Up 49 minutes (healthy)   0.0.0.0:5601->5601/tcp, :::5601->5601/tcp

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.