Creating a cluster under Docker

Hello, I am running into a problem creating a cluster (3 nodes) with elasticsearch.
Note that I am using docker, and that I am not using the official elastic images for this.

When I start my container stack with docker-compose, the elasticsearch containers come up and can see each other on the network (I ran tests with curl from inside the containers).
But the cluster does not form: each node creates its own cluster and proclaims itself master of that cluster, and on top of that every cluster has the same UUID.

So I end up with 3 clusters of one node each instead of one cluster of 3 nodes.

Here is the configuration of my elasticsearch.yml:

cluster.name: elastic-cluster
node.name: ${SERVICE}
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0
http.port: ${ELASTICSEARCH_PORT}
discovery.zen.ping.unicast.hosts:
  - ${HOST1}
  - ${HOST2}
cluster.initial_master_nodes:
  - elastic
  - elastic-2
  - elastic-3

I have already tried:

  • discovery.zen.ping.unicast.hosts as an array ["host1","host2"]
  • discovery.seed_hosts as a list AND as an array

Next, here is the configuration of my docker-compose:

version: '2.2'
services:
  elastic:
    image: elastic-debian:test
    container_name: elastic
    mem_limit: 4000m
    mem_reservation: 4000m
    cpus: '2'
    ports:
      - ${ELASTICSEARCH_PORT}:${ELASTICSEARCH_PORT}
    volumes:
      - mydata-test:/var/lib/elasticsearch
      - elastic_log-test:/var/log/elasticsearch
    networks:
    - elk-test
    restart: unless-stopped
    environment:
      - SERVICE=elastic
      - ELASTICSEARCH_PORT=${ELASTICSEARCH_PORT}
      - HOST1=elastic-2
      - HOST2=elastic-3

  elastic-2:
    image: elastic-debian:test
    container_name: elastic-2
    mem_limit: 4000m
    mem_reservation: 4000m
    cpus: '2'
    volumes:
      - mydata2-test:/var/lib/elasticsearch
      - elastic_log2-test:/var/log/elasticsearch
    networks:
    - elk-test
    restart: unless-stopped
    environment:
      - SERVICE=elastic-2
      - ELASTICSEARCH_PORT=${ELASTICSEARCH_PORT}
      - HOST1=elastic
      - HOST2=elastic-3
      
  elastic-3:
    image: elastic-debian:test
    container_name: elastic-3
    mem_limit: 4000m
    mem_reservation: 4000m
    cpus: '2'
    volumes:
      - mydata3-test:/var/lib/elasticsearch
      - elastic_log3-test:/var/log/elasticsearch
    networks:
    - elk-test
    restart: unless-stopped
    environment:
      - SERVICE=elastic-3
      - ELASTICSEARCH_PORT=${ELASTICSEARCH_PORT}
      - HOST1=elastic
      - HOST2=elastic-2

  kibana:
    image: kibana-debian:test
    container_name: kibana
    mem_limit: 2000m
    mem_reservation: 1000m
    cpus: '1'
    ports:
      - ${KIBANA_PORT}:${KIBANA_PORT}
    networks:
    - elk-test
    restart: unless-stopped
    environment:
      - KIBANA_PORT=${KIBANA_PORT}
      - KIBANA_PASSWORD=${KIBANA_PASSWORD}
      - HOST_URL=${HOST_URL}
      - ELASTICSEARCH_PORT=${ELASTICSEARCH_PORT}
    
volumes:
  mydata-test:
  elastic_log-test:
  elastic_log2-test:
  elastic_log3-test:
  mydata2-test:
  mydata3-test:

networks:
  elk-test:
    driver: bridge

The contents of my .env:

KIBANA_PASSWORD=password
HOST_URL=http://elk.home
KIBANA_PORT=5601
ELASTICSEARCH_PORT=9200

Here is the result of a curl from the elastic container:

root@abce750895f3:/usr/share/elasticsearch# curl -X GET "http://elastic:9200/_cat/nodes?v&pretty"
ip           heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
192.168.16.4            4          94   0    0.10    0.05     0.14 dilm      *      elastic
root@abce750895f3:/usr/share/elasticsearch# curl -X GET "http://elastic-2:9200/_cat/nodes?v&pretty"
ip           heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
192.168.16.2            6          94   1    0.09    0.04     0.14 dilm      *      elastic-2
root@abce750895f3:/usr/share/elasticsearch# curl -X GET "http://elastic-3:9200/_cat/nodes?v&pretty"
ip           heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
192.168.16.3            6          94   1    0.15    0.06     0.15 dilm      *      elastic-3

And finally, here are the logs I get on the elastic container:

[2020-06-25T15:00:37,596][INFO ][o.e.t.TransportService   ] [elastic] publish_address {192.168.32.2:9300}, bound_addresses {0.0.0.0:9300}
[2020-06-25T15:00:38,622][INFO ][o.e.b.BootstrapChecks    ] [elastic] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2020-06-25T15:00:38,699][INFO ][o.e.c.c.Coordinator      ] [elastic] cluster UUID [kUP771ffRi6FNrtu069u4g]
[2020-06-25T15:00:40,360][WARN ][o.e.m.j.JvmGcMonitorService] [elastic] [gc][young][2][14] duration [1.1s], collections [1]/[2.1s], total [1.1s]/[4.1s], memory [129.6mb]->[119mb]/[3.9gb], all_pools {[young] [65.6mb]->[18.3mb]/[133.1mb]}{[survivor] [15.4mb]->[14.5mb]/[16.6mb]}{[old] [48.5mb]->[86.6mb]/[3.8gb]}
[2020-06-25T15:00:40,367][WARN ][o.e.m.j.JvmGcMonitorService] [elastic] [gc][2] overhead, spent [1.1s] collecting in the last [2.1s]
[2020-06-25T15:00:40,403][INFO ][o.e.c.s.MasterService    ] [elastic] elected-as-master ([1] nodes joined)[{elastic}{T-B42F0tSH25NhASqQQiZQ}{lo9vvcWXQT6CNKBg9AVFdw}{192.168.32.2}{192.168.32.2:9300}{dilm}{ml.machine_memory=16797630464, xpack.installed=true, ml.max_open_jobs=20} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 3, version: 37, delta: master node changed {previous [], current [{elastic}{T-B42F0tSH25NhASqQQiZQ}{lo9vvcWXQT6CNKBg9AVFdw}{192.168.32.2}{192.168.32.2:9300}{dilm}{ml.machine_memory=16797630464, xpack.installed=true, ml.max_open_jobs=20}]}
[2020-06-25T15:00:40,646][INFO ][o.e.c.s.ClusterApplierService] [elastic] master node changed {previous [], current [{elastic}{T-B42F0tSH25NhASqQQiZQ}{lo9vvcWXQT6CNKBg9AVFdw}{192.168.32.2}{192.168.32.2:9300}{dilm}{ml.machine_memory=16797630464, xpack.installed=true, ml.max_open_jobs=20}]}, term: 3, version: 37, reason: Publication{term=3, version=37}
[2020-06-25T15:00:40,866][INFO ][o.e.h.AbstractHttpServerTransport] [elastic] publish_address {192.168.32.2:9200}, bound_addresses {0.0.0.0:9200}
[2020-06-25T15:00:40,869][INFO ][o.e.n.Node               ] [elastic] started
[2020-06-25T15:00:41,787][INFO ][o.e.l.LicenseService     ] [elastic] license [6b57049c-72dc-47fd-a0b4-ec8a564af9ac] mode [basic] - valid
[2020-06-25T15:00:41,788][INFO ][o.e.x.s.s.SecurityStatusChangeListener] [elastic] Active license is now [BASIC]; Security is disabled
[2020-06-25T15:00:41,841][INFO ][o.e.g.GatewayService     ] [elastic] recovered [4] indices into cluster_state
[2020-06-25T15:00:44,377][WARN ][o.e.m.j.JvmGcMonitorService] [elastic] [gc][6] overhead, spent [641ms] collecting in the last [1s]
[2020-06-25T15:00:44,487][INFO ][o.e.c.r.a.AllocationService] [elastic] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[.security-7][0], [.kibana_1][0], [.kibana_task_manager_1][0]]]).

elastic-2:

[2020-06-25T13:56:44,059][INFO ][o.e.t.TransportService   ] [elastic-2] publish_address {192.168.16.2:9300}, bound_addresses {0.0.0.0:9300}
[2020-06-25T13:56:45,115][INFO ][o.e.b.BootstrapChecks    ] [elastic-2] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2020-06-25T13:56:45,162][INFO ][o.e.c.c.Coordinator      ] [elastic-2] cluster UUID [kUP771ffRi6FNrtu069u4g]
[2020-06-25T13:56:46,704][INFO ][o.e.c.s.MasterService    ] [elastic-2] elected-as-master ([1] nodes joined)[{elastic-2}{T-B42F0tSH25NhASqQQiZQ}{VbX2MhanTHS24I2X1uQl9w}{192.168.16.2}{192.168.16.2:9300}{dilm}{ml.machine_memory=16797630464, xpack.installed=true, ml.max_open_jobs=20} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 2, version: 22, delta: master node changed {previous [], current [{elastic-2}{T-B42F0tSH25NhASqQQiZQ}{VbX2MhanTHS24I2X1uQl9w}{192.168.16.2}{192.168.16.2:9300}{dilm}{ml.machine_memory=16797630464, xpack.installed=true, ml.max_open_jobs=20}]}
[2020-06-25T13:56:46,711][INFO ][o.e.m.j.JvmGcMonitorService] [elastic-2] [gc][young][3][14] duration [893ms], collections [1]/[1s], total [893ms]/[3.3s], memory [149.2mb]->[119.2mb]/[3.9gb], all_pools {[young] [85.3mb]->[18mb]/[133.1mb]}{[survivor] [15.2mb]->[14.4mb]/[16.6mb]}{[old] [48.6mb]->[86.6mb]/[3.8gb]}
[2020-06-25T13:56:46,717][WARN ][o.e.m.j.JvmGcMonitorService] [elastic-2] [gc][3] overhead, spent [893ms] collecting in the last [1s]
[2020-06-25T13:56:47,166][INFO ][o.e.c.s.ClusterApplierService] [elastic-2] master node changed {previous [], current [{elastic-2}{T-B42F0tSH25NhASqQQiZQ}{VbX2MhanTHS24I2X1uQl9w}{192.168.16.2}{192.168.16.2:9300}{dilm}{ml.machine_memory=16797630464, xpack.installed=true, ml.max_open_jobs=20}]}, term: 2, version: 22, reason: Publication{term=2, version=22}
[2020-06-25T13:56:47,346][INFO ][o.e.h.AbstractHttpServerTransport] [elastic-2] publish_address {192.168.16.2:9200}, bound_addresses {0.0.0.0:9200}
[2020-06-25T13:56:47,349][INFO ][o.e.n.Node               ] [elastic-2] started
[2020-06-25T13:56:48,089][INFO ][o.e.l.LicenseService     ] [elastic-2] license [6b57049c-72dc-47fd-a0b4-ec8a564af9ac] mode [basic] - valid
[2020-06-25T13:56:48,091][INFO ][o.e.x.s.s.SecurityStatusChangeListener] [elastic-2] Active license is now [BASIC]; Security is disabled
[2020-06-25T13:56:48,121][INFO ][o.e.g.GatewayService     ] [elastic-2] recovered [1] indices into cluster_state
[2020-06-25T13:56:49,966][INFO ][o.e.c.r.a.AllocationService] [elastic-2] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[.security-7][0]]]).

elastic-3:

[2020-06-25T15:00:37,624][INFO ][o.e.t.TransportService   ] [elastic-3] publish_address {192.168.32.4:9300}, bound_addresses {0.0.0.0:9300}
[2020-06-25T15:00:38,662][INFO ][o.e.b.BootstrapChecks    ] [elastic-3] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2020-06-25T15:00:38,718][INFO ][o.e.c.c.Coordinator      ] [elastic-3] cluster UUID [kUP771ffRi6FNrtu069u4g]
[2020-06-25T15:00:39,126][INFO ][o.e.c.s.MasterService    ] [elastic-3] elected-as-master ([1] nodes joined)[{elastic-3}{T-B42F0tSH25NhASqQQiZQ}{l7XNbU3KTQysNBR2ymDMAQ}{192.168.32.4}{192.168.32.4:9300}{dilm}{ml.machine_memory=16797630464, xpack.installed=true, ml.max_open_jobs=20} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 3, version: 27, delta: master node changed {previous [], current [{elastic-3}{T-B42F0tSH25NhASqQQiZQ}{l7XNbU3KTQysNBR2ymDMAQ}{192.168.32.4}{192.168.32.4:9300}{dilm}{ml.machine_memory=16797630464, xpack.installed=true, ml.max_open_jobs=20}]}
[2020-06-25T15:00:39,927][INFO ][o.e.m.j.JvmGcMonitorService] [elastic-3] [gc][young][2][14] duration [708ms], collections [1]/[1.7s], total [708ms]/[3.2s], memory [120.7mb]->[113.1mb]/[3.9gb], all_pools {[young] [60.5mb]->[17.4mb]/[133.1mb]}{[survivor] [11.5mb]->[9.7mb]/[16.6mb]}{[old] [48.7mb]->[86.8mb]/[3.8gb]}
[2020-06-25T15:00:39,933][INFO ][o.e.m.j.JvmGcMonitorService] [elastic-3] [gc][2] overhead, spent [708ms] collecting in the last [1.7s]
[2020-06-25T15:00:40,138][INFO ][o.e.c.s.ClusterApplierService] [elastic-3] master node changed {previous [], current [{elastic-3}{T-B42F0tSH25NhASqQQiZQ}{l7XNbU3KTQysNBR2ymDMAQ}{192.168.32.4}{192.168.32.4:9300}{dilm}{ml.machine_memory=16797630464, xpack.installed=true, ml.max_open_jobs=20}]}, term: 3, version: 27, reason: Publication{term=3, version=27}
[2020-06-25T15:00:40,389][INFO ][o.e.h.AbstractHttpServerTransport] [elastic-3] publish_address {192.168.32.4:9200}, bound_addresses {0.0.0.0:9200}
[2020-06-25T15:00:40,393][INFO ][o.e.n.Node               ] [elastic-3] started
[2020-06-25T15:00:41,269][INFO ][o.e.l.LicenseService     ] [elastic-3] license [6b57049c-72dc-47fd-a0b4-ec8a564af9ac] mode [basic] - valid
[2020-06-25T15:00:41,271][INFO ][o.e.x.s.s.SecurityStatusChangeListener] [elastic-3] Active license is now [BASIC]; Security is disabled
[2020-06-25T15:00:41,303][INFO ][o.e.g.GatewayService     ] [elastic-3] recovered [1] indices into cluster_state
[2020-06-25T15:00:43,459][INFO ][o.e.c.r.a.AllocationService] [elastic-3] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[.security-7][0]]]).

That should be everything needed, including config and logs.
Thanks in advance to anyone who can shed light on this so I can finally build this cluster; if any information is missing, tell me and I will add it as quickly as possible.

Thanks.

Regards,
Benjamin


Hello Benjamin,

If elasticsearch.yml really contains the discovery settings with several initial_master_nodes, it is not possible for a single node to hold an election and elect itself master on its own, so the most likely explanation is that elasticsearch.yml does not hold the right values, or that there is a problem with the environment-variable substitution inside the containers.
Note that between tests you should wipe the contents of the data volumes so as to start from scratch.
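With this compose file, one minimal way to do that (standard docker-compose flags):

# Stop the stack and delete the named volumes declared in docker-compose.yml
docker-compose down -v

# Or remove a single named volume by hand (compose prefixes it with the project name)
docker volume rm <project>_mydata-test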

With the same parameters on 7.8.0, if I start only one of the 3 containers, I get:

es0              | {"type": "server", "timestamp": "2020-06-25T15:56:09,248Z", "level": "WARN", "component": "o.e.c.c.ClusterFormationFailureHelper", "cluster.name": "cluster0", "node.name": "es0", "message": "master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [es0, es1, es2] to bootstrap a cluster: have discovered [{es0}{NpwAJ23iQvuDopQlW1Zuzw}{UGGBxLecTcaQvksnuPK7gA}{172.18.0.2}{172.18.0.2:9300}{dimrt}{xpack.installed=true, transform.node=true}]; discovery will continue using [] from hosts providers and [{es0}{NpwAJ23iQvuDopQlW1Zuzw}{UGGBxLecTcaQvksnuPK7gA}{172.18.0.2}{172.18.0.2:9300}{dimrt}{xpack.installed=true, transform.node=true}] from last-known cluster state; node term 0, last-accepted version 0 in term 0" }

If elasticsearch.yml comes from the image, perhaps check the elasticsearch.yml file directly inside the container? Careful: in YAML, indentation matters.
Also, as a first step, test without environment variables by passing elasticsearch.yml in through a volume, to check whether it works when launching a single container, which should surface the error (docker-compose up -d service && docker-compose logs -f service | grep master).
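For example, a one-off run with the config bind-mounted read-only (a sketch; it assumes the image reads its config from /etc/elasticsearch, which the JVM arguments in the logs later in this thread suggest):

# Bypasses any env-var substitution done inside the image
docker run --rm \
  -v "$PWD/elasticsearch.yml:/etc/elasticsearch/elasticsearch.yml:ro" \
  elastic-debian:test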

Also, I would advise starting from the official images as a base (they can be customized; that lets you reuse the entrypoint scripts, for example, and usage will also map more directly onto the images' documentation).

Hello Julien,

There is no problem with the variable substitution, but to be safe I created 3 different config files.

elastic 1:

# ======================== Elasticsearch Configuration =========================
node.name: node-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0
http.port: 9200
discovery.seed_hosts: ["node-1", "node-2", "node-3"]
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]

elastic 2:

# ======================== Elasticsearch Configuration =========================
node.name: node-2
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0
http.port: 9200
discovery.seed_hosts: ["node-1", "node-2", "node-3"]
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]

elastic 3:

# ======================== Elasticsearch Configuration =========================
node.name: node-3
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0
http.port: 9200
discovery.seed_hosts: ["node-1", "node-2", "node-3"]
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]

I only changed the node.name of each node.

Now the new docker-compose.yml configuration:

version: '2.2'
services:
  elastic:
    image: elastic-debian:7.6.1-c1
    container_name: elastic
    mem_limit: 4000m
    mem_reservation: 4000m
    cpus: '2'
    ports:
      - 9200:9200
    volumes:
      - mydata-test:/var/lib/elasticsearch
      - elastic_log-test:/var/log/elasticsearch
    networks:
    - elk-test
    restart: unless-stopped

  elastic2:
    image: elastic-debian:7.6.1-c2
    container_name: elastic2
    mem_limit: 4000m
    mem_reservation: 4000m
    cpus: '2'
    volumes:
      - mydata-test2:/var/lib/elasticsearch
      - elastic_log-test2:/var/log/elasticsearch
    networks:
      - elk-test
    restart: unless-stopped

  elastic3:
    image: elastic-debian:7.6.1-c3
    container_name: elastic3
    mem_limit: 4000m
    mem_reservation: 4000m
    cpus: '2'
    volumes:
      - mydata-test3:/var/lib/elasticsearch
      - elastic_log-test3:/var/log/elasticsearch
    networks:
      - elk-test
    restart: unless-stopped

  kibana:
    image: kibana-debian:7.6.1-c
    container_name: kibana
    mem_limit: 2000m
    mem_reservation: 1000m
    cpus: '1'
    ports:
      - 5601:5601
    networks:
    - elk-test
    restart: unless-stopped

volumes:
  mydata-test:
  elastic_log-test:
  mydata-test2:
  elastic_log-test2:
  mydata-test3:
  elastic_log-test3:

networks:
  elk-test:
    driver: bridge

Note that I do delete all existing volumes between attempts and that YAML indentation is not the problem. With this configuration, same problem as before: 3 clusters of 1 node each.

I tried starting a single elasticsearch container with the configuration quoted above, and here is the resulting log output:

[2020-07-22T13:20:41,051][INFO ][o.e.e.NodeEnvironment    ] [node-1] using [1] data paths, mounts [[/var/lib/elasticsearch (/dev/mapper/datavg-lv_docker)]], net usable_space [71.3gb], net total_space [78.2gb], types [ext4]
[2020-07-22T13:20:41,059][INFO ][o.e.e.NodeEnvironment    ] [node-1] heap size [3.9gb], compressed ordinary object pointers [true]
[2020-07-22T13:20:41,446][INFO ][o.e.n.Node               ] [node-1] node name [node-1], node ID [bavlb_3sQFq1AVdf3jJXbg], cluster name [elasticsearch]
[2020-07-22T13:20:41,446][INFO ][o.e.n.Node               ] [node-1] version[7.6.1], pid[1], build[default/deb/aa751e09be0a5072e8570670309b1f12348f023b/2020-02-29T00:15:25.529771Z], OS[Linux/4.19.0-8-amd64/amd64], JVM[AdoptOpenJDK/OpenJDK 64-Bit Server VM/13.0.2/13.0.2+8]
[2020-07-22T13:20:41,447][INFO ][o.e.n.Node               ] [node-1] JVM home [/usr/share/elasticsearch/jdk]
[2020-07-22T13:20:41,448][INFO ][o.e.n.Node               ] [node-1] JVM arguments [-Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.locale.providers=COMPAT, -Xms4g, -Xmx4g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch-2848283616810815521, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch, -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Djava.locale.providers=COMPAT, -XX:MaxDirectMemorySize=2147483648, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/etc/elasticsearch, -Des.distribution.flavor=default, -Des.distribution.type=deb, -Des.bundled_jdk=true]
[2020-07-22T13:20:45,346][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [aggs-matrix-stats]
[2020-07-22T13:20:45,347][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [analysis-common]
[2020-07-22T13:20:45,347][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [flattened]
[2020-07-22T13:20:45,347][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [frozen-indices]
[2020-07-22T13:20:45,347][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [ingest-common]
[2020-07-22T13:20:45,348][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [ingest-geoip]
[2020-07-22T13:20:45,348][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [ingest-user-agent]
[2020-07-22T13:20:45,348][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [lang-expression]
[2020-07-22T13:20:45,348][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [lang-mustache]
[2020-07-22T13:20:45,349][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [lang-painless]
[2020-07-22T13:20:45,349][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [mapper-extras]
[2020-07-22T13:20:45,349][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [parent-join]
[2020-07-22T13:20:45,349][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [percolator]
[2020-07-22T13:20:45,349][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [rank-eval]
[2020-07-22T13:20:45,350][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [reindex]
[2020-07-22T13:20:45,350][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [repository-url]
[2020-07-22T13:20:45,350][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [search-business-rules]
[2020-07-22T13:20:45,350][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [spatial]
[2020-07-22T13:20:45,351][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [systemd]
[2020-07-22T13:20:45,351][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [transform]
[2020-07-22T13:20:45,351][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [transport-netty4]
[2020-07-22T13:20:45,351][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [vectors]
[2020-07-22T13:20:45,351][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [x-pack-analytics]
[2020-07-22T13:20:45,352][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [x-pack-ccr]
[2020-07-22T13:20:45,352][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [x-pack-core]
[2020-07-22T13:20:45,352][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [x-pack-deprecation]
[2020-07-22T13:20:45,352][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [x-pack-enrich]
[2020-07-22T13:20:45,353][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [x-pack-graph]
[2020-07-22T13:20:45,353][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [x-pack-ilm]
[2020-07-22T13:20:45,353][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [x-pack-logstash]
[2020-07-22T13:20:45,353][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [x-pack-ml]
[2020-07-22T13:20:45,354][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [x-pack-monitoring]
[2020-07-22T13:20:45,354][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [x-pack-rollup]
[2020-07-22T13:20:45,354][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [x-pack-security]
[2020-07-22T13:20:45,354][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [x-pack-sql]
[2020-07-22T13:20:45,354][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [x-pack-voting-only-node]
[2020-07-22T13:20:45,355][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [x-pack-watcher]
[2020-07-22T13:20:45,355][INFO ][o.e.p.PluginsService     ] [node-1] no plugins loaded
[2020-07-22T13:20:50,816][INFO ][o.e.x.s.a.s.FileRolesStore] [node-1] parsed [0] roles from file [/etc/elasticsearch/roles.yml]
[2020-07-22T13:20:51,670][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [node-1] [controller/95] [Main.cc@110] controller (64 bit): Version 7.6.1 (Build 6eb6e036390036) Copyright (c) 2020 Elasticsearch BV
[2020-07-22T13:20:52,531][DEBUG][o.e.a.ActionModule       ] [node-1] Using REST wrapper from plugin org.elasticsearch.xpack.security.Security
[2020-07-22T13:20:52,686][INFO ][o.e.d.DiscoveryModule    ] [node-1] using discovery type [zen] and seed hosts providers [settings]
[2020-07-22T13:20:53,879][INFO ][o.e.n.Node               ] [node-1] initialized
[2020-07-22T13:20:53,880][INFO ][o.e.n.Node               ] [node-1] starting ...
[2020-07-22T13:20:54,019][INFO ][o.e.t.TransportService   ] [node-1] publish_address {172.20.0.2:9300}, bound_addresses {0.0.0.0:9300}
[2020-07-22T13:20:54,410][INFO ][o.e.b.BootstrapChecks    ] [node-1] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2020-07-22T13:20:54,462][INFO ][o.e.c.c.Coordinator      ] [node-1] cluster UUID [Nsf5LJqhRIqEs1pnI7Sgkw]
[2020-07-22T13:20:54,642][INFO ][o.e.c.s.MasterService    ] [node-1] elected-as-master ([1] nodes joined)[{node-1}{bavlb_3sQFq1AVdf3jJXbg}{II5fPCjiRIC4dnxnWWb5Qw}{172.20.0.2}{172.20.0.2:9300}{dilm}{ml.machine_memory=16821686272, xpack.installed=true, ml.max_open_jobs=20} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 3, version: 27, delta: master node changed {previous [], current [{node-1}{bavlb_3sQFq1AVdf3jJXbg}{II5fPCjiRIC4dnxnWWb5Qw}{172.20.0.2}{172.20.0.2:9300}{dilm}{ml.machine_memory=16821686272, xpack.installed=true, ml.max_open_jobs=20}]}
[2020-07-22T13:20:54,896][INFO ][o.e.c.s.ClusterApplierService] [node-1] master node changed {previous [], current [{node-1}{bavlb_3sQFq1AVdf3jJXbg}{II5fPCjiRIC4dnxnWWb5Qw}{172.20.0.2}{172.20.0.2:9300}{dilm}{ml.machine_memory=16821686272, xpack.installed=true, ml.max_open_jobs=20}]}, term: 3, version: 27, reason: Publication{term=3, version=27}
[2020-07-22T13:20:54,950][INFO ][o.e.h.AbstractHttpServerTransport] [node-1] publish_address {172.20.0.2:9200}, bound_addresses {0.0.0.0:9200}
[2020-07-22T13:20:54,951][INFO ][o.e.n.Node               ] [node-1] started
[2020-07-22T13:20:55,249][INFO ][o.e.l.LicenseService     ] [node-1] license [1997c68b-8254-47c6-91a8-6e813663863b] mode [basic] - valid
[2020-07-22T13:20:55,250][INFO ][o.e.x.s.s.SecurityStatusChangeListener] [node-1] Active license is now [BASIC]; Security is disabled
[2020-07-22T13:20:55,260][INFO ][o.e.g.GatewayService     ] [node-1] recovered [1] indices into cluster_state
[2020-07-22T13:20:55,960][INFO ][o.e.c.r.a.AllocationService] [node-1] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[.security-7][0]]]).

It proclaimed itself master even though there are several nodes in cluster.initial_master_nodes.

Moreover, I tested with the official images and there was no problem at all; however, for the project I am working on I cannot use them, which is why I have to go through this "custom" image.

Do you have any idea where the problem comes from? I feel like I have tried an enormous number of different configurations, with no difference whatsoever in the results.

Thank you in advance.

I sincerely apologize for the time it took to reply to your message; I was unavailable.

If you do not explicitly specify the node roles, they default to: master-eligible / data / ingest / etc. As soon as you start your cluster, one and only one node among the master-eligible ones is elected master.

So it is entirely normal that, out of a set of eligible nodes (cluster.initial_master_nodes), 1 node is elected master.
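For reference, each node reports which master it currently recognizes; a quick check against any node (assuming the HTTP port is reachable):

curl -s "http://localhost:9200/_cat/master?v"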

OK, I see... but then how come it ends up alone?
Bear in mind that I start all 3 at the same time, that they talk to each other on the network (by container name or by IP), and above all that all three have the same cluster UUID.
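For illustration, that identical UUID can be confirmed from inside any of the containers (a sketch; the root endpoint of each node reports its cluster_uuid, container names as in the compose file above):

for h in elastic elastic2 elastic3; do
  echo -n "$h -> "; curl -s "http://$h:9200/" | grep cluster_uuid
done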

PS: I have made some changes to the elasticsearch.yml file; I realized that I had forgotten cluster.name, and that the containers could not join one another because of the name "node-1", which was not the container's name.

# ======================== Elasticsearch Configuration =========================
cluster.name: elastic-cluster
node.name: elastic
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0
http.port: 9200
discovery.seed_hosts: ["elastic", "elastic2", "elastic3"]
cluster.initial_master_nodes: ["elastic", "elastic2", "elastic3"]

That is completely normal: you will always have 1 elected master in your cluster. If it goes down, another master-eligible node will be elected master and take over running the cluster.

However, I think the values of discovery.seed_hosts should be IPs, or addresses that can be resolved via DNS.

That is already the case: elastic, elastic2 and elastic3 are DNS-resolved addresses. And since I am working on Docker, I cannot go through the IPs directly.


I nevertheless tested with the IPs directly, just in case, so as to rule out that possibility.
elasticsearch config:

# ======================== Elasticsearch Configuration =========================
cluster.name: elastic-cluster
node.name: elastic
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0
http.port: 9200
discovery.seed_hosts: ["192.168.10.10", "192.168.10.11", "192.168.10.12"]
cluster.initial_master_nodes: ["elastic", "elastic2", "elastic3"]

(node.name changed per node)

New docker-compose config:

version: '2.2'
services:
  elastic:
    image: elastic-debian:7.6.1-c1
    container_name: elastic
    mem_limit: 4000m
    mem_reservation: 4000m
    cpus: '2'
    ports: 
      - 9200:9200
    volumes: 
      - mydata-test:/var/lib/elasticsearch
      - elastic_log-test:/var/log/elasticsearch
    networks:
      elk-test:
        ipv4_address: 192.168.10.10
    restart: unless-stopped
  
  elastic2:
    image: elastic-debian:7.6.1-c2
    container_name: elastic2
    mem_limit: 4000m
    mem_reservation: 4000m
    cpus: '2'
    volumes: 
      - mydata-test2:/var/lib/elasticsearch
      - elastic_log-test2:/var/log/elasticsearch
    networks:
      elk-test:
        ipv4_address: 192.168.10.11
    restart: unless-stopped

  elastic3:
    image: elastic-debian:7.6.1-c3
    container_name: elastic3
    mem_limit: 4000m
    mem_reservation: 4000m
    cpus: '2'
    volumes:
      - mydata-test3:/var/lib/elasticsearch
      - elastic_log-test3:/var/log/elasticsearch
    networks:
      elk-test:
        ipv4_address: 192.168.10.12
    restart: unless-stopped

  kibana:
    image: kibana-debian:7.6.1-c
    container_name: kibana
    mem_limit: 2000m
    mem_reservation: 1000m
    cpus: '1'
    ports:
      - 5601:5601
    networks:
      elk-test:
        ipv4_address: 192.168.10.13
    restart: unless-stopped

volumes:
  mydata-test:
  elastic_log-test:
  mydata-test2:
  elastic_log-test2:        
  mydata-test3:
  elastic_log-test3:        

networks:
  elk-test:
    driver: bridge
    ipam:
      config:
        - subnet: 192.168.10.0/24
          gateway: 192.168.10.1

The result is still the same; here is the response from the clusters:

root@2ff59c42a2bc:/usr/share/elasticsearch# curl --noproxy "*" -k -X GET "http://192.168.10.10:9200/_cluster/health?pretty"
{
  "cluster_name" : "elastic-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 5,
  "active_shards" : 5,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
root@2ff59c42a2bc:/usr/share/elasticsearch# curl --noproxy "*" -k -X GET "http://192.168.10.11:9200/_cluster/health?pretty"
{
  "cluster_name" : "elastic-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 1,
  "active_shards" : 1,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
root@2ff59c42a2bc:/usr/share/elasticsearch# curl --noproxy "*" -k -X GET "http://192.168.10.12:9200/_cluster/health?pretty"
{
  "cluster_name" : "elastic-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 1,
  "active_shards" : 1,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

At that point I was connected to the "elastic" container, so I do have access to the other two; the problem persists even when going through the IPs directly, so the problem is not name resolution.
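One check the curls above do not cover (an extra diagnostic, not something from the thread): node discovery happens on the transport port (9300), not the HTTP port (9200). A quick reachability test from inside a container, assuming bash is available:

for h in 192.168.10.10 192.168.10.11 192.168.10.12; do
  timeout 2 bash -c "echo > /dev/tcp/$h/9300" && echo "$h:9300 open" || echo "$h:9300 closed"
done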

Hello,

I am bumping this thread because to this day I still have no solution to this problem.
Does anyone have a lead?

After several days of searching I have FINALLY found the problem:
I was setting the built-in users' passwords in the Dockerfile; I removed that part, and the cluster then formed correctly.

Here is the part I removed from my Dockerfile, which was setting the built-in users' passwords:

# Seed bootstrap.password into the keystore, start Elasticsearch, set each
# built-in user's password through the _security API, then stop the service.
RUN printf "password" | ./../usr/share/elasticsearch/bin/elasticsearch-keystore add "bootstrap.password" \
    && echo 'xpack.security.enabled: true' >> elasticsearch/elasticsearch.yml \
    && service elasticsearch start \
    && while ! echo exit | curl --noproxy "*" -k -u elastic:password -X POST "http://localhost:9200/_security/user/kibana/_password?pretty" -H 'Content-Type: application/json' -d'{ "password" : "'"${KIBANA_PASSWORD}"'"}'; do sleep 10; done \
    && curl --noproxy "*" -k -u elastic:password -X POST "http://localhost:9200/_security/user/logstash_system/_password?pretty" -H 'Content-Type: application/json' -d'{ "password" : "'"${LOGSTASH_PASSWORD}"'"}' \
    && curl --noproxy "*" -k -u elastic:password -X POST "http://localhost:9200/_security/user/beats_system/_password?pretty" -H 'Content-Type: application/json' -d'{ "password" : "'"${BEATSSYSTEM_PASSWORD}"'"}' \
    && curl --noproxy "*" -k -u elastic:password -X POST "http://localhost:9200/_security/user/apm_system/_password?pretty" -H 'Content-Type: application/json' -d'{ "password" : "'"${APM_PASSWORD}"'"}' \
    && curl --noproxy "*" -k -u elastic:password -X POST "http://localhost:9200/_security/user/remote_monitoring_user/_password?pretty" -H 'Content-Type: application/json' -d'{ "password" : "'"${MONITORING_PASSWORD}"'"}' \
    && curl --noproxy "*" -k -u elastic:password -X POST "http://localhost:9200/_security/user/elastic/_password?pretty" -H 'Content-Type: application/json' -d'{ "password" : "'"${ELASTIC_PASSWORD}"'"}' \
    && service elasticsearch stop
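A plausible reason this breaks clustering, consistent with the identical cluster UUID (and even identical node ID) visible in the logs earlier in the thread: running Elasticsearch during docker build persists a bootstrapped single-node cluster state under path.data inside the image, and Docker copies that content into each freshly created named volume, so all three containers start from the same already-formed cluster state. A quick way to check whether such state is baked in (a sketch, using an image tag from the compose file above):

# If this lists a "nodes" directory, cluster state was persisted at build time
docker run --rm --entrypoint ls elastic-debian:7.6.1-c1 /var/lib/elasticsearch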

But this raises a new problem: I want to put SSL back on elasticsearch/kibana, but since I could not set the built-in users' passwords, it is impossible to create the cluster; the error message is the following:

[2020-08-06T14:01:49,493][INFO ][o.e.x.m.e.l.LocalExporter] [elastic] waiting for elected master node [{elastic3}{hvvNgzfKQhSCnR5el4S-Ww}{clsRl6YbTyixwanUXJCnVA}{192.168.48.4}{192.168.48.4:9300}{dilm}{ml.machine_memory=16821686272, ml.max_open_jobs=20, xpack.installed=true}] to setup local exporter [default_local] (does it have x-pack installed?)

Here is the elasticsearch config (note that this config works under normal conditions, normal conditions = single node):

# ======================== Elasticsearch Configuration =========================
http.port: 9200

path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch

transport.tcp.port: 9300

network.host: 0
network.publish_host: 0
network.bind_host: 0

cluster.name: elastic-cluster
cluster.initial_master_nodes: 
  - elastic
  - elastic2
  - elastic3

node.name: ${node-name}
node.master: true
node.data: true
node.ingest: true

discovery.seed_hosts: 
  - elastic
  - elastic2
  - elastic3
discovery.initial_state_timeout: 5m
discovery.zen.minimum_master_nodes: 2

gateway.recover_after_nodes: 2
gateway.expected_nodes: 3

xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: certs/domain.p12
xpack.security.transport.ssl.truststore.path: certs/domain.p12
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: certs/http.p12
xpack.security.http.ssl.truststore.path: certs/http.p12

How can I put SSL back without setting the built-in users' passwords in the Dockerfile?
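One possible direction, sketched below under the assumption that an entrypoint script is acceptable after all: seed bootstrap.password into the keystore at container start instead of at build time, so that no node state is ever written into the image, then set the built-in users' passwords once against the running cluster with the same _security calls as above. BOOTSTRAP_PASSWORD here is a hypothetical environment variable.

#!/usr/bin/env bash
# entrypoint sketch: seed the bootstrap password, then start Elasticsearch
set -e
echo "${BOOTSTRAP_PASSWORD}" | /usr/share/elasticsearch/bin/elasticsearch-keystore add -x bootstrap.password
exec /usr/share/elasticsearch/bin/elasticsearch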

Hello, I am back with a bit more information and further tests.

So it turns out that if I create the built-in users' passwords in the Dockerfile (no matter how; I tried several approaches), the cluster cannot be created. It creates 3 clusters of 1 node each, and we are back to square one.

But if x-pack is left untouched in the Dockerfile, the cluster forms correctly, though the following error message appears ONLY on elastic2 and elastic3:

[elastic2] waiting for elected master node [{elastic}{jqenPksFQIGG3ROUc6g6Gg}{QSGkeWHbQ0S1EWqJRxtSsg}{172.19.0.2}{172.19.0.2:9300}{dilm}{ml.machine_memory=16821686272, ml.max_open_jobs=20, xpack.installed=true}] to setup local exporter [default_local] (does it have x-pack installed?)

And it is impossible to call the API, because I have no credentials, since they were never created.
Curl output:

curl --noproxy "*" -k -X GET "https://localhost:9200/_cat/nodes?v&pretty"
{
  "error" : {
    "root_cause" : [
      {
        "type" : "security_exception",
        "reason" : "missing authentication credentials for REST request [/_cat/nodes?v&pretty]",
        "header" : {
          "WWW-Authenticate" : [
            "Bearer realm=\"security\"",
            "ApiKey",
            "Basic realm=\"security\" charset=\"UTF-8\""
          ]
        }
      }
    ],
    "type" : "security_exception",
    "reason" : "missing authentication credentials for REST request [/_cat/nodes?v&pretty]",
    "header" : {
      "WWW-Authenticate" : [
        "Bearer realm=\"security\"",
        "ApiKey",
        "Basic realm=\"security\" charset=\"UTF-8\""
      ]
    }
  },
  "status" : 401
}

So my two questions are:
1 - Is it normal that clustering becomes impossible when the built-in users' passwords are initialized in the Docker image?
2 - How can I set these passwords AND bring up my cluster, all with SSL? Bearing in mind that I have to do it in the Docker image and not go through an entrypoint (or maybe there is another way, but I do not see how).
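For what it's worth, one other avenue (an assumption on my part, not something verified in this thread): instead of baking passwords into the image, run the bundled tool once against the live cluster after it has formed; on a fresh 7.x node it can authenticate using the auto-generated keystore seed.

# One-time, after the cluster is up (not during image build):
docker exec -it elastic /usr/share/elasticsearch/bin/elasticsearch-setup-passwords auto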

I can provide the configuration files/logs if any are missing.
And thanks in advance to anyone who can help with this problem.

Benjamin

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.