Docker compose ELK

Hello,
here is my docker-compose; I need to add authentication.


version: '3.7'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.13.2
    container_name: elasticsearch
    environment:
      - node.name=es01
      - cluster.name=docker-cluster
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms4g -Xmx4g"
      - xpack.security.enabled=false
      - bootstrap.memory_lock=true
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:9200 || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
    logging:
      driver: json-file
      options:
        max-size: "1g"
        max-file: "3"

  logstash:
    image: docker.elastic.co/logstash/logstash:8.13.2
    container_name: logstash
    depends_on:
      elasticsearch:
        condition: service_healthy
    ports:
      - "5044:5044"
    environment:
      - xpack.monitoring.elasticsearch.hosts=http://elasticsearch:9200
      - LS_JAVA_OPTS=-Xms4g -Xmx4g
    volumes:
      - ./logstash/pipeline:/usr/share/logstash/pipeline
    logging:
      driver: json-file
      options:
        max-size: "1g"
        max-file: "3"

  kibana:
    image: docker.elastic.co/kibana/kibana:8.13.2
    container_name: kibana
    ports:
      - "5601:5601"
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    depends_on:
      elasticsearch:
        condition: service_healthy
    logging:
      driver: json-file
      options:
        max-size: "1g"
        max-file: "3"

volumes:
  esdata:
    driver: local
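For reference, a minimal sketch of what turning authentication on could look like in the file above, without TLS, so suitable for a first experiment only, not for production. The password value is a placeholder; the elasticsearch image sets the built-in elastic user's password from the ELASTIC_PASSWORD variable, and Kibana would then need its own credentials (the kibana_system user, not elastic):

```yaml
# Hedged sketch: in the elasticsearch service above, replace
# xpack.security.enabled=false with these lines ("changeme" is a placeholder)
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=true
      # Sets the password of the built-in elastic user on first start
      - ELASTIC_PASSWORD=changeme
```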

Hello 👋

The docker compose example in the documentation is secured by default.

Maybe start from there?
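The documentation's compose file expects a .env file next to it; all the ${...} variables in it come from there. A sketch of what that file needs to define (every value below is a placeholder to adapt):

```shell
# .env placed next to docker-compose.yml; every value is a placeholder to adapt
# Version of the Elastic Stack images to pull
STACK_VERSION=8.13.2
# Password for the built-in elastic superuser
ELASTIC_PASSWORD=changeme
# Password for the kibana_system user (used by Kibana to talk to Elasticsearch)
KIBANA_PASSWORD=changeme
CLUSTER_NAME=docker-cluster
# License type: basic or trial
LICENSE=basic
ES_PORT=9200
KIBANA_PORT=5601
# Memory limit per container, in bytes (1 GB here)
MEM_LIMIT=1073741824
```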

If I understand correctly, migrating to version 8.15 will automatically activate security.

(Screenshots elk1.PNG, ELK2.PNG, and erreur.PNG did not upload.)

Not only 8.15; 8.13 does as well.
But use the files from the documentation instead of your own configuration.

Then remove what you don't need (like the two other nodes) and add what you need, Logstash I think.

To tell the truth, I can't manage it at all. But just having an already-configured docker-compose file wouldn't really help me either. In our country (Guinea) nobody knows this tool. I think it would be an asset, and a way to introduce it into our country.

I don't understand what you meant.

Good morning.
I said that I took courses on Elasticsearch, but in French-speaking areas, especially in Africa, people don't know about Elasticsearch.
I think that if I manage to understand how it works, I will be able to convince several of my colleagues to adopt this brilliant tool.
But I also said that I still can't activate security.

We can speak in French in Discussions en français 😉

Did you try my solution? It uses security by default.

Well then, continuing in French would be better, because my English is not too good.
As for your solution, I tried it, but it doesn't work for the moment.

I noticed that when I set xpack.security.enabled=true, it no longer works.
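For context, this is expected with the first compose file: once security is on, the anonymous curl in the healthcheck gets a 401, the container never reports healthy, and the services behind depends_on never start. A hedged variant of the healthcheck that treats an authentication error as "Elasticsearch is up" (the same trick the documentation's setup service uses):

```yaml
    healthcheck:
      # A 401 body containing this message means the node is up and secured
      test:
        [
          "CMD-SHELL",
          "curl -s http://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 30s
      timeout: 10s
      retries: 3
```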


Did you download the two files from the documentation?
And then run:

docker compose up

?

Yes, I tried, but it doesn't work. Here is what I did:

[billing@ogn-prebilling 8.13.0]$ cat docker-compose.yml
version: "2.2"

services:
  setup:
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
    user: "0"
    command: >
      bash -c '
        if [ x${ELASTIC_PASSWORD} == x ]; then
          echo "Set the ELASTIC_PASSWORD environment variable in the .env file";
          exit 1;
        elif [ x${KIBANA_PASSWORD} == x ]; then
          echo "Set the KIBANA_PASSWORD environment variable in the .env file";
          exit 1;
        fi;
        if [ ! -f config/certs/ca.zip ]; then
          echo "Creating CA";
          bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
          unzip config/certs/ca.zip -d config/certs;
        fi;
        if [ ! -f config/certs/certs.zip ]; then
          echo "Creating certs";
          echo -ne \
          "instances:\n"\
          "  - name: es01\n"\
          "    dns:\n"\
          "      - es01\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          > config/certs/instances.yml;
          bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
          unzip config/certs/certs.zip -d config/certs;
        fi;
        echo "Setting file permissions"
        chown -R root:root config/certs;
        find . -type d -exec chmod 750 \{\} \;
        find . -type f -exec chmod 640 \{\} \;
        echo "Waiting for Elasticsearch availability";
        until curl -s --cacert config/certs/ca/ca.crt https://elasticsearch:9200 | grep -q "missing authentication credentials"; do sleep 30; done;
        echo "Setting kibana_system password";
        until curl -s -X POST --cacert config/certs/ca/ca.crt -u "elastic:${ELASTIC_PASSWORD}" -H "Content-Type: application/json" https://elasticsearch:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done;
        echo "All done!";
      '
    healthcheck:
      test: ["CMD-SHELL", "[ -f config/certs/es01/es01.crt ]"]
      interval: 30s
      timeout: 10s
      retries: 3
    logging:
      driver: json-file
      options:
        max-size: "1g"
        max-file: "3"

  elasticsearch:  # Elasticsearch service named elasticsearch
    depends_on:
      setup:
        condition: service_healthy
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - esdata:/usr/share/elasticsearch/data
    ports:
      - ${ES_PORT}:9200
    environment:
      - node.name=es01
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01
      - discovery.seed_hosts=es01
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es01/es01.key
      - xpack.security.http.ssl.certificate=certs/es01/es01.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es01/es01.key
      - xpack.security.transport.ssl.certificate=certs/es01/es01.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  logstash:
    image: docker.elastic.co/logstash/logstash:8.13.2
    container_name: logstash
    depends_on:
      elasticsearch:  # Updated here
        condition: service_healthy
    ports:
      - "5044:5044"
    environment:
      - xpack.monitoring.elasticsearch.hosts=http://elasticsearch:9200  # Updated here
      - LS_JAVA_OPTS=-Xms4g -Xmx4g
    volumes:
      - ./logstash/pipeline:/usr/share/logstash/pipeline
    logging:
      driver: json-file
      options:
        max-size: "1g"
        max-file: "3"

  kibana:
    depends_on:
      elasticsearch:  # Updated here
        condition: service_healthy
    image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
    volumes:
      - certs:/usr/share/kibana/config/certs
      - kibanadata:/usr/share/kibana/data
    ports:
      - ${KIBANA_PORT}:5601
    environment:
      - SERVERNAME=kibana
      - ELASTICSEARCH_HOSTS=https://elasticsearch:9200  # Updated here
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
      - ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
    mem_limit: ${MEM_LIMIT}
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120
    logging:
      driver: json-file
      options:
        max-size: "1g"
        max-file: "3"

volumes:
  certs:
    driver: local
  esdata:
    driver: local
  kibanadata:
    driver: local

[billing@ogn-prebilling 8.13.0]$

I had even forgotten that we were in an English discussion. That is the docker-compose file,

and here is the error it displays:

[billing@ogn-prebilling 8.13.0]$ docker compose logs | grep -i 'error'
WARN[0000] /home/billing/8.13.0/docker-compose.yml: `version` is obsolete
kibana-1  | [2024-10-21T09:09:16.886+00:00][INFO ][plugins.notifications] Email Service Error: Email connector not specified.
kibana-1  | [2024-10-21T09:09:46.277+00:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. Hostname/IP does not match certificate's altnames: Host: elasticsearch. is not in the cert's altnames: DNS:es01, IP Address:127.0.0.1, DNS:localhost
kibana-1  | [2024-10-21T09:28:58.789+00:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. connect ECONNREFUSED 172.27.0.3:9200
kibana-1  | [2024-10-21T09:29:07.014+00:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. getaddrinfo EAI_AGAIN elasticsearch
kibana-1  | [2024-10-21T09:32:33.764+00:00][INFO ][plugins.notifications] Email Service Error: Email connector not specified.
kibana-1  | [2024-10-21T09:32:56.021+00:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. Hostname/IP does not match certificate's altnames: Host: elasticsearch. is not in the cert's altnames: DNS:es01, IP Address:127.0.0.1, DNS:localhost
kibana-1  | [2024-10-21T09:52:37.689+00:00][ERROR][plugins.ruleRegistry] Error: Timeout: it took more than 1200000ms
kibana-1  | [2024-10-21T09:52:37.733+00:00][ERROR][plugins.ruleRegistry] Error: Failure during installation of common resources shared between all indices. Timeout: it took more than 1200000ms
kibana-1  | [2024-10-21T10:17:43.937+00:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. connect ECONNREFUSED 172.27.0.3:9200
kibana-1  | [2024-10-21T10:17:54.613+00:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. getaddrinfo EAI_AGAIN elasticsearch
kibana-1  | [2024-10-21T10:18:32.433+00:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. connect ECONNREFUSED 172.27.0.3:9200
kibana-1  | [2024-10-21T10:19:21.610+00:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. Hostname/IP does not match certificate's altnames: Host: elasticsearch. is not in the cert's altnames: DNS:es01, IP Address:127.0.0.1, DNS:localhost
elasticsearch-1  | {"@timestamp":"2024-10-21T09:30:11.518Z", "log.level": "INFO", "message":"JVM arguments [-Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -Djava.security.manager=allow, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j2.formatMsgNoLookups=true, -Djava.locale.providers=SPI,COMPAT, --add-opens=java.base/java.io=org.elasticsearch.preallocate, --enable-native-access=org.elasticsearch.nativeaccess, -Des.cgroups.hierarchy.override=/, -XX:ReplayDataFile=logs/replay_pid%p.log, -Des.distribution.type=docker, -XX:+UseG1GC, -Djava.io.tmpdir=/tmp/elasticsearch-4808009127935411434, --add-modules=jdk.incubator.vector, -XX:+HeapDumpOnOutOfMemoryError, -XX:+ExitOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,level,pid,tags:filecount=32,filesize=64m, -Xms7864m, -Xmx7864m, -XX:MaxDirectMemorySize=4123000832, -XX:G1HeapRegionSize=4m, -XX:InitiatingHeapOccupancyPercent=30, -XX:G1ReservePercent=15, --module-path=/usr/share/elasticsearch/lib, --add-modules=jdk.net, --add-modules=ALL-MODULE-PATH, -Djdk.module.main=org.elasticsearch.server]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"main","log.logger":"org.elasticsearch.node.Node","elasticsearch.node.name":"es01","elasticsearch.cluster.name":"docker-cluster"}
elasticsearch-1  | {"@timestamp":"2024-10-21T09:30:50.471Z", "log.level": "WARN", "message":"failed to resolve host [es01]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[es01][cluster_coordination][T#1]","log.logger":"org.elasticsearch.discovery.SeedHostsResolver","elasticsearch.node.name":"es01","elasticsearch.cluster.name":"docker-cluster","error.type":"java.net.UnknownHostException","error.message":"es01: Temporary failure in name resolution","error.stack_trace":"java.net.UnknownHostException: es01: Temporary failure in name resolution\n\tat java.base/java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)\n\tat java.base/java.net.Inet6AddressImpl.lookupAllHostAddr(Inet6AddressImpl.java:52)\n\tat java.base/java.net.InetAddress$PlatformResolver.lookupByName(InetAddress.java:1211)\n\tat java.base/java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1828)\n\tat java.base/java.net.InetAddress$NameServiceAddresses.get(InetAddress.java:1139)\n\tat java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1818)\n\tat java.base/java.net.InetAddress.getAllByName(InetAddress.java:1688)\n\tat org.elasticsearch.server@8.13.2/org.elasticsearch.transport.TcpTransport.parse(TcpTransport.java:652)\n\tat org.elasticsearch.server@8.13.2/org.elasticsearch.transport.TcpTransport.addressesFromString(TcpTransport.java:594)\n\tat org.elasticsearch.server@8.13.2/org.elasticsearch.transport.TransportService.addressesFromString(TransportService.java:1130)\n\tat org.elasticsearch.server@8.13.2/org.elasticsearch.discovery.SeedHostsResolver.lambda$resolveHosts$0(SeedHostsResolver.java:92)\n\tat java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)\n\tat org.elasticsearch.server@8.13.2/org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:917)\n\tat 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)\n\tat java.base/java.lang.Thread.run(Thread.java:1583)\n"}
elasticsearch-1  | {"@timestamp":"2024-10-21T10:18:43.668Z", "log.level": "INFO", "message":"JVM arguments [-Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -Djava.security.manager=allow, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j2.formatMsgNoLookups=true, -Djava.locale.providers=SPI,COMPAT, --add-opens=java.base/java.io=org.elasticsearch.preallocate, --enable-native-access=org.elasticsearch.nativeaccess, -Des.cgroups.hierarchy.override=/, -XX:ReplayDataFile=logs/replay_pid%p.log, -Des.distribution.type=docker, -XX:+UseG1GC, -Djava.io.tmpdir=/tmp/elasticsearch-14281786778791872215, --add-modules=jdk.incubator.vector, -XX:+HeapDumpOnOutOfMemoryError, -XX:+ExitOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,level,pid,tags:filecount=32,filesize=64m, -Xms7864m, -Xmx7864m, -XX:MaxDirectMemorySize=4123000832, -XX:G1HeapRegionSize=4m, -XX:InitiatingHeapOccupancyPercent=30, -XX:G1ReservePercent=15, --module-path=/usr/share/elasticsearch/lib, --add-modules=jdk.net, --add-modules=ALL-MODULE-PATH, -Djdk.module.main=org.elasticsearch.server]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"main","log.logger":"org.elasticsearch.node.Node","elasticsearch.node.name":"es01","elasticsearch.cluster.name":"docker-cluster"}
elasticsearch-1  | {"@timestamp":"2024-10-21T10:19:19.488Z", "log.level": "WARN", "message":"failed to resolve host [es01]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[es01][cluster_coordination][T#1]","log.logger":"org.elasticsearch.discovery.SeedHostsResolver","elasticsearch.node.name":"es01","elasticsearch.cluster.name":"docker-cluster","error.type":"java.net.UnknownHostException","error.message":"es01: Temporary failure in name resolution","error.stack_trace":"java.net.UnknownHostException: es01: Temporary failure in name resolution\n\tat java.base/java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)\n\tat java.base/java.net.Inet6AddressImpl.lookupAllHostAddr(Inet6AddressImpl.java:52)\n\tat java.base/java.net.InetAddress$PlatformResolver.lookupByName(InetAddress.java:1211)\n\tat java.base/java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1828)\n\tat java.base/java.net.InetAddress$NameServiceAddresses.get(InetAddress.java:1139)\n\tat java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1818)\n\tat java.base/java.net.InetAddress.getAllByName(InetAddress.java:1688)\n\tat org.elasticsearch.server@8.13.2/org.elasticsearch.transport.TcpTransport.parse(TcpTransport.java:652)\n\tat org.elasticsearch.server@8.13.2/org.elasticsearch.transport.TcpTransport.addressesFromString(TcpTransport.java:594)\n\tat org.elasticsearch.server@8.13.2/org.elasticsearch.transport.TransportService.addressesFromString(TransportService.java:1130)\n\tat org.elasticsearch.server@8.13.2/org.elasticsearch.discovery.SeedHostsResolver.lambda$resolveHosts$0(SeedHostsResolver.java:92)\n\tat java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)\n\tat org.elasticsearch.server@8.13.2/org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:917)\n\tat 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)\n\tat java.base/java.lang.Thread.run(Thread.java:1583)\n"}


Hello

You modified the script provided by default. In particular, es01 became elasticsearch.
I would have preferred that you:

  1. Start without touching the configuration, and so validate that the provided configuration works for you
  2. Remove the es02 and es03 nodes, restart, and check that everything works
  3. Rename the es01 node to elasticsearch, restart, and check that everything works
  4. Add the logstash configuration, restart, and check that everything works

That would no doubt also let you diagnose your errors even better.
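The rename step touches es01 in many places at once: node name, seed hosts, certificate paths, and the instances.yml that decides the certificate's SAN list. A hedged sketch of that rename as a single substitution, demonstrated here on a stand-in snippet; the real edit would target docker-compose.yml, and the certs volume would have to be wiped so the certificates get regenerated for the new name:

```shell
# Stand-in for a few of the affected compose lines (the real target is docker-compose.yml)
cat > /tmp/compose-snippet.yml <<'EOF'
      - node.name=es01
      - discovery.seed_hosts=es01
      - xpack.security.http.ssl.key=certs/es01/es01.key
EOF

# Rename every whole-word occurrence of es01 to elasticsearch
sed -i 's/\bes01\b/elasticsearch/g' /tmp/compose-snippet.yml

cat /tmp/compose-snippet.yml
```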

A few errors noted here and there:

          "instances:\n"\
          "  - name: es01\n"\
          "    dns:\n"\
          "      - es01\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\

Maybe:

      test: ["CMD-SHELL", "[ -f config/certs/es01/es01.crt ]"]

Or again:

    environment:
      - node.name=es01
      - cluster.initial_master_nodes=es01
      - discovery.seed_hosts=es01
      - xpack.security.http.ssl.key=certs/es01/es01.key
      - xpack.security.http.ssl.certificate=certs/es01/es01.crt
      - xpack.security.transport.ssl.key=certs/es01/es01.key
      - xpack.security.transport.ssl.certificate=certs/es01/es01.crt

Here

    image: docker.elastic.co/logstash/logstash:8.13.2

Should be:

    image: docker.elastic.co/logstash/logstash:${STACK_VERSION}

Maybe a volume is missing for logstash?
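To make that concrete, a hedged sketch of what the logstash service could look like once aligned with the secured file. Names and paths assume the stock documentation file with its es01 node; the pipeline output itself would also need to point at https://es01:9200 with the same user, password, and CA file:

```yaml
  logstash:
    image: docker.elastic.co/logstash/logstash:${STACK_VERSION}
    depends_on:
      es01:
        condition: service_healthy
    ports:
      - "5044:5044"
    environment:
      # Monitoring now has to go through HTTPS with credentials
      - xpack.monitoring.elasticsearch.hosts=https://es01:9200
      - xpack.monitoring.elasticsearch.username=elastic
      - xpack.monitoring.elasticsearch.password=${ELASTIC_PASSWORD}
      - xpack.monitoring.elasticsearch.ssl.certificate_authority=/usr/share/logstash/config/certs/ca/ca.crt
    volumes:
      # Mount the shared certs volume so Logstash can trust the CA
      - certs:/usr/share/logstash/config/certs
      - ./logstash/pipeline:/usr/share/logstash/pipeline
```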

Yes, but since I already had a base elasticsearch, that's why I wanted it to work with that one, and it is named elasticsearch, not es01.

Go through the steps I gave you one by one. When one of the steps gets stuck, say where you are and what you did, and we'll look at what's going on.