Problems with bootstrapping my 2-node cluster. Master node not found

Hi all,

I have googled and searched these forums extensively, but unfortunately still no luck with this.

I am trying to bootstrap a new cluster based on Docker and Elasticsearch 7.5.0. I keep getting the error: "master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes"

I can reach both servers from each other through curl and telnet, so there are no port/firewall problems. I am quite convinced that the instances are talking to each other, because when I made a typo in the cluster name I got different errors.

Any help would be much appreciated!

This is my configuration file:

version: '2.2'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.0
    container_name: es01
    environment:
      - node.name=hostname01.ournetwork.internal
      - discovery.seed_hosts=hostname01.ournetwork.internal:9301,hostname01.ournetwork.internal:9302,hostname02.ournetwork.internal:9300
      - cluster.initial_master_nodes=hostname01.ournetwork.internal,hostname02.ournetwork.internal
      - cluster.name=my-elasticsearch-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms5g -Xmx5g"
      - "node.master=true"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata01:/data/elasticsearch
    networks:
      - esnet
    ports:
      - 9200:9200
      - 9300:9300

volumes:
  esdata01:
    driver: local

networks:
  esnet:

On the other machine I have the same configuration file; the only differences are the hostnames and ports:

ports:
  - 9201:9200
  - 9301:9300

and

ports:
  - 9202:9200
  - 9302:9300

Hi @paul4,

your discovery.seed_hosts setting looks wrong (hostname01 is listed twice).

To dig further, we would need to see the full log files from the nodes when they start up.

Hi @paul4,

you might also want to try following the guide here.

Hi @HenningAndersen,

Thanks for your swift reply! I did follow that guide; it worked with 2 containers on one server, but the problem seems to be introduced by adding the second server.

What I tried was to put two nodes on one server and one node on the other. But as you already pointed out, I seem to be doing something wrong with the hostnames or ports.

I also tried using the names from the example (es01, es02 and es03) and only putting the servers' common names and ports in the seed hosts, but still no luck.

I'll paste the config file of the other server below, as well as the whole startup process and the error that comes up. Thanks for your help, much appreciated!

 sudo docker-compose up
Starting es01 ...
Starting es01 ... done
Attaching to es01
es01    | OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:41,209Z", "level": "INFO", "component": "o.e.e.NodeEnvironment", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "using [1] data paths, mounts [[/ (overlay)]], net usable_space [997.7gb], net total_space [999.5gb], types [overlay]" }
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:41,213Z", "level": "INFO", "component": "o.e.e.NodeEnvironment", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "heap size [4.9gb], compressed ordinary object pointers [true]" }
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:41,215Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "node name [hostname01.ournetwork.internal], node ID [YFBshE4wTGGTYJKrb9y49g], cluster name [my-elasticsearch-cluster]" }
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:41,215Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "version[7.5.0], pid[1], build[default/docker/e9ccaed468e2fac2275a3761849cbee64b39519f/2019-11-26T01:06:52.518245Z], OS[Linux/4.15.0-74-generic/amd64], JVM[AdoptOpenJDK/OpenJDK 64-Bit Server VM/13.0.1/13.0.1+9]" }
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:41,215Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "JVM home [/usr/share/elasticsearch/jdk]" }
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:41,215Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "JVM arguments [-Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.locale.providers=COMPAT, -Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.io.tmpdir=/tmp/elasticsearch-2686082342240465008, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Des.cgroups.hierarchy.override=/, -Xms5g, -Xmx5g, -XX:MaxDirectMemorySize=2684354560, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config, -Des.distribution.flavor=default, -Des.distribution.type=docker, -Des.bundled_jdk=true]" }
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:42,785Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "loaded module [aggs-matrix-stats]" }
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:42,785Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "loaded module [analysis-common]" }
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:42,785Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "loaded module [flattened]" }
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:42,786Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "loaded module [frozen-indices]" }
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:42,786Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "loaded module [ingest-common]" }
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:42,786Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "loaded module [ingest-geoip]" }
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:42,786Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "loaded module [ingest-user-agent]" }
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:42,786Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "loaded module [lang-expression]" }
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:42,787Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "loaded module [lang-mustache]" }
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:42,787Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "loaded module [lang-painless]" }
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:42,787Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "loaded module [mapper-extras]" }
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:42,787Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "loaded module [parent-join]" }
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:42,787Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "loaded module [percolator]" }
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:42,787Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "loaded module [rank-eval]" }
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:42,787Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "loaded module [reindex]" }
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:42,787Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "loaded module [repository-url]" }
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:42,788Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "loaded module [search-business-rules]" }
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:42,788Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "loaded module [spatial]" }
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:42,788Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "loaded module [transform]" }
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:42,788Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "loaded module [transport-netty4]" }
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:42,788Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "loaded module [vectors]" }
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:42,788Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "loaded module [x-pack-analytics]" }
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:42,789Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "loaded module [x-pack-ccr]" }

(continues in next post)

es01    | {"type": "server", "timestamp": "2020-01-24T19:46:42,789Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "loaded module [x-pack-core]" }
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:42,789Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "loaded module [x-pack-deprecation]" }
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:42,789Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "loaded module [x-pack-enrich]" }
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:42,789Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "loaded module [x-pack-graph]" }
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:42,789Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "loaded module [x-pack-ilm]" }
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:42,789Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "loaded module [x-pack-logstash]" }
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:42,790Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "loaded module [x-pack-ml]" }
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:42,790Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "loaded module [x-pack-monitoring]" }
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:42,790Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "loaded module [x-pack-rollup]" }
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:42,790Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "loaded module [x-pack-security]" }
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:42,790Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "loaded module [x-pack-sql]" }
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:42,790Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "loaded module [x-pack-voting-only-node]" }
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:42,790Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "loaded module [x-pack-watcher]" }
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:42,791Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "no plugins loaded" }
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:45,687Z", "level": "INFO", "component": "o.e.x.s.a.s.FileRolesStore", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "parsed [0] roles from file [/usr/share/elasticsearch/config/roles.yml]" }
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:46,349Z", "level": "INFO", "component": "o.e.x.m.p.l.CppLogMessageHandler", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "[controller/110] [Main.cc@110] controller (64 bit): Version 7.5.0 (Build 17d1c724ca38a1) Copyright (c) 2019 Elasticsearch BV" }
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:46,799Z", "level": "DEBUG", "component": "o.e.a.ActionModule", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "Using REST wrapper from plugin org.elasticsearch.xpack.security.Security" }
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:46,915Z", "level": "INFO", "component": "o.e.d.DiscoveryModule", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "using discovery type [zen] and seed hosts providers [settings]" }
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:47,693Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "initialized" }
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:47,694Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "starting ..." }
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:47,843Z", "level": "INFO", "component": "o.e.t.TransportService", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "publish_address {172.19.0.2:9300}, bound_addresses {0.0.0.0:9300}" }
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:47,943Z", "level": "INFO", "component": "o.e.b.BootstrapChecks", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "bound or publishing to a non-loopback address, enforcing bootstrap checks" }
es01    | {"type": "server", "timestamp": "2020-01-24T19:46:57,957Z", "level": "WARN", "component": "o.e.c.c.ClusterFormationFailureHelper", "cluster.name": "my-elasticsearch-cluster", "node.name": "hostname01.ournetwork.internal", "message": "master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [hostname01.ournetwork.internal, hostname02.ournetwork.internal] to bootstrap a cluster: have discovered [{hostname01.ournetwork.internal}{YFBshE4wTGGTYJKrb9y49g}{Tk2mlrJNTUur6i4-tOJoUw}{172.19.0.2}{172.19.0.2:9300}{dilm}{ml.machine_memory=33730560000, xpack.installed=true, ml.max_open_jobs=20}]; discovery will continue using [10.242.52.24:9301, 10.242.52.24:9302, 10.242.52.210:9300] from hosts providers and [{hostname01.ournetwork.internal}{YFBshE4wTGGTYJKrb9y49g}{Tk2mlrJNTUur6i4-tOJoUw}{172.19.0.2}{172.19.0.2:9300}{dilm}{ml.machine_memory=33730560000, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 0, last-accepted version 0 in term 0" }

This is the config file of the other server:

version: '2.2'
services:
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.0
    container_name: es02
    environment:
      - node.name=zb-els-02.digi.intern
      - discovery.seed_hosts=zb-els-01.digi.intern:9300,zb-els-02.digi.intern:9301,zb-els-02.digi.intern:9302
      - cluster.initial_master_nodes=zb-els-01.digi.intern,zb-els-02.digi.intern
      - cluster.name=induno-cluster-2
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms5g -Xmx5g"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata02:/data/elasticsearch/02
    networks:
      - esnet
    ports:
      - 9201:9200
      - 9301:9300

  es03:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.0
    container_name: es03
    environment:
      - node.name=zb-els-02.digi.intern
      - discovery.seed_hosts=zb-els-01.digi.intern:9300,zb-els-02.digi.intern:9301,zb-els-02.digi.intern:9302
      - cluster.initial_master_nodes=zb-els-01.digi.intern,zb-els-02.digi.intern
      - cluster.name=induno-cluster-2
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms5g -Xmx5g"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata03:/data/elasticsearch/03
    networks:
      - esnet
    ports:
      - 9202:9200
      - 9302:9300

volumes:
  esdata02:
    driver: local
  esdata03:
    driver: local

networks:
  esnet:

I think this is the case we recently discovered to have poor logging (addressed in #51304). This node is reporting that it cannot discover the other node:

have discovered [{hostname01.ournetwork.internal}{YFBshE4wTGGTYJKrb9y49g}{Tk2mlrJNTUur6i4-tOJoUw}{172.19.0.2}{172.19.0.2:9300}{dilm}{ml.machine_memory=33730560000, xpack.installed=true, ml.max_open_jobs=20}]

It's also reporting that it's using completely different addresses for discovery:

discovery will continue using [10.242.52.24:9301, 10.242.52.24:9302, 10.242.52.210:9300]

I suspect it is able to contact the other node on a 10.242.52.x address but cannot connect to its proper address on the 172.19.x.x network.
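
Under Docker's default bridge networking, the node binds and publishes its container address (172.19.0.2 here), which is unreachable from the other host. One possible workaround (a sketch, not from this thread; the hostname is the one used in the post) is to make the node advertise the host's routable address and the port actually published on the host:

```yaml
# Hypothetical additions to the es01 service's environment section.
# These make the node advertise an address the other server can reach,
# instead of the container's bridge address (172.19.0.2:9300).
environment:
  - network.publish_host=hostname01.ournetwork.internal
  - transport.publish_port=9300   # the host port mapped to container port 9300
```

With these settings, the address in the node's cluster state would be reachable from the other Docker host, even though the container itself still binds to its bridge address.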

Thanks @DavidTurner for analyzing the log files! That is very likely the problem you addressed here.

The 10.242.x.x addresses are the actual VPS addresses. I think the 172.19.x.x addresses are the Docker-assigned addresses, so maybe it has something to do with the port bindings within Docker.

I thought the port mappings (9301:9300 etc.) would take care of this, but apparently not.

I know this may not be the forum for this... but do you know how to fix it? I use the standard Elasticsearch Docker image and start it with 'docker-compose up', which doesn't accept the parameters -p or --publish-all=true.

When I start with 'docker-compose up -d' it doesn't give an error immediately, but I still get an error about the master node:

curl 127.0.0.1:9200/_cluster/health
{"error":{"root_cause":[{"type":"master_not_discovered_exception","reason":null}],"type":"master_not_discovered_exception","reason":null},"status":503}

Would I be better off joining the servers with a Docker Swarm configuration, avoiding these problems?

I'm guessing you are using a bridge network, whose docs say:

Bridge networks apply to containers running on the same Docker daemon host. For communication among containers running on different Docker daemon hosts, you can either manage routing at the OS level, or you can use an overlay network.

Maybe you are looking for the host network type instead, or, as suggested here, you could use an overlay network.
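
For the host-network option, a minimal sketch of the change to the compose service (based on the es01 file earlier in the thread; note that with host networking the ports: mapping is ignored and can be dropped):

```yaml
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.0
    # Replaces the esnet bridge network: the container shares the host's
    # network stack, so 9200/9300 bind directly on the host and the node
    # publishes an address that is reachable from the other server.
    network_mode: host
    # The ports: mapping and the esnet network section are no longer needed.
```

With only one Elasticsearch container per host, host networking avoids the publish-address mismatch entirely; running multiple containers on one host would require distinct http.port/transport.port values instead.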

Thanks @DavidTurner!

I changed networks: esnet to network_mode: host and now it works. /_cluster/health reports cluster status 'green' and 3 nodes, so I think that's fine.

Now on to building the rest of the stack 🙂


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.