Docker Swarm multi-node Elasticsearch cluster error

I'm trying to set up a multi-node Elasticsearch cluster on Docker Swarm. Here is my compose file:

services:
    elasticsearch:
        image: elastic/elasticsearch:8.13.0
        hostname: elasticsearch
        environment:
            - cluster.name=swarm-cluster
            - ELASTIC_PASSWORD=tglist
            - discovery.seed_hosts=10.0.0.7:9301
            - cluster.initial_master_nodes=10.0.0.10:9300,10.0.0.7:9301
            - bootstrap.memory_lock=true
            - network.publish_host=_eth1_
            - xpack.security.enabled=false
            - 'ES_JAVA_OPTS=-Xms4g -Xmx4g'
        ports:
            - target: 9200
              published: 9200
              protocol: tcp
              mode: ingress
            - target: 9300
              published: 9300
              protocol: tcp
              mode: ingress
        volumes:
            - /mnt/HC_Volume_100587668/elasticsearch/data:/usr/share/elasticsearch/data
        ulimits:
            memlock:
                soft: -1
                hard: -1
        deploy:
            replicas: 1
            placement:
                constraints:
                    - node.hostname==main

    elasticsearch2:
        image: elastic/elasticsearch:8.13.0
        hostname: elasticsearch2
        environment:
            - cluster.name=swarm-cluster
            - xpack.security.enabled=false
            - ELASTIC_PASSWORD=tglist
            - discovery.seed_hosts=10.0.0.10:9300
            - cluster.initial_master_nodes=10.0.0.10:9300,10.0.0.7:9301
            - bootstrap.memory_lock=true
            - network.publish_host=_eth1_
            - 'ES_JAVA_OPTS=-Xms8g -Xmx8g'
        volumes:
            - ./elasticsearch/data:/usr/share/elasticsearch/data
            - type: tmpfs
              target: /dev/shm
        ports:
            - target: 9200
              published: 9201
              protocol: tcp
              mode: ingress
            - target: 9300
              published: 9301
              protocol: tcp
              mode: ingress
        deploy:
            replicas: 1
            placement:
                constraints:
                    - node.hostname==main2
        ulimits:
            memlock:
                soft: -1
                hard: -1

But I keep getting errors like this:

{"@timestamp":"2024-05-07T17:34:06.499Z", "log.level": "WARN", "message":"completed handshake with [{elasticindices}{qheyn8RsQfSbpsI1U9aNpg}{n_7zTzzeQnGcp4YrfV9-ag}{elasticindices}{10.0.1.174}{10.0.1.174:9300}{cdfhilmrstw}{8.13.0}{7000099-8503000}] at [10.0.0.10:9300] but followup connection to [10.0.1.174:9300] failed", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[elasticindices2][generic][T#4]","log.logger":"org.elasticsearch.discovery.HandshakingTransportAddressConnector","elasticsearch.node.name":"elasticindices2","elasticsearch.cluster.name":"swarm-cluster","error.type":"org.elasticsearch.transport.ConnectTransportException","error.message":"[elasticindices][10.0.1.174:9300] connect_exception","error.stack_trace":"org.elasticsearch.transport.ConnectTransportException: [elasticindices][10.0.1.174:9300] connect_exception\n\tat org.elasticsearch.server@8.13.0/org.elasticsearch.transport.TcpTransport$ChannelsConnectedListener.onFailure(TcpTransport.java:1144)\n\tat org.elasticsearch.server@8.13.0/org.elasticsearch.action.support.SubscribableListener$FailureResult.complete(SubscribableListener.java:378)\n\tat org.elasticsearch.server@8.13.0/org.elasticsearch.action.support.SubscribableListener.tryComplete(SubscribableListener.java:290)\n\tat org.elasticsearch.server@8.13.0/org.elasticsearch.action.support.SubscribableListener.setResult(SubscribableListener.java:315)\n\tat org.elasticsearch.server@8.13.0/org.elasticsearch.action.support.SubscribableListener.onFailure(SubscribableListener.java:234)\n\tat org.elasticsearch.transport.netty4@8.13.0/org.elasticsearch.transport.netty4.Netty4Utils.lambda$addListener$2(Netty4Utils.java:178)\n\tat io.netty.common@4.1.94.Final/io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590)\n\tat io.netty.common@4.1.94.Final/io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583)\n\tat io.netty.common@4.1.94.Final/io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559)\n\tat io.netty.common@4.1.94.Final/io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492)\n\tat io.netty.common@4.1.94.Final/io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636)\n\tat io.netty.common@4.1.94.Final/io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:629)\n\tat io.netty.common@4.1.94.Final/io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:118)\n\tat io.netty.transport@4.1.94.Final/io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:321)\n\tat io.netty.transport@4.1.94.Final/io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:337)\n\tat io.netty.transport@4.1.94.Final/io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:776)\n\tat io.netty.transport@4.1.94.Final/io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:689)\n\tat io.netty.transport@4.1.94.Final/io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:652)\n\tat io.netty.transport@4.1.94.Final/io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)\n\tat io.netty.common@4.1.94.Final/io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)\n\tat io.netty.common@4.1.94.Final/io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)\n\tat 
java.base/java.lang.Thread.run(Thread.java:1570)\nCaused by: org.elasticsearch.common.util.concurrent.UncategorizedExecutionException: Failed execution\n\tat org.elasticsearch.server@8.13.0/org.elasticsearch.action.support.SubscribableListener.wrapAsExecutionException(SubscribableListener.java:271)\n\tat org.elasticsearch.server@8.13.0/org.elasticsearch.common.util.concurrent.ListenableFuture.wrapException(ListenableFuture.java:38)\n\tat org.elasticsearch.server@8.13.0/org.elasticsearch.common.util.concurrent.ListenableFuture.wrapException(ListenableFuture.java:27)\n\t... 18 more\nCaused by: java.util.concurrent.ExecutionException: io.netty.channel.AbstractChannel$AnnotatedNoRouteToHostException: No route to host: 10.0.1.174/10.0.1.174:9300\n\t... 21 more\nCaused by: io.netty.channel.AbstractChannel$AnnotatedNoRouteToHostException: No route to host: 10.0.1.174/10.0.1.174:9300\nCaused by: java.net.NoRouteToHostException: No route to host\n\tat java.base/sun.nio.ch.Net.pollConnect(Native Method)\n\tat java.base/sun.nio.ch.Net.pollConnectNow(Net.java:682)\n\tat java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:1060)\n\tat io.netty.transport@4.1.94.Final/io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:337)\n\tat io.netty.transport@4.1.94.Final/io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334)\n\tat io.netty.transport@4.1.94.Final/io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:776)\n\tat io.netty.transport@4.1.94.Final/io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:689)\n\tat io.netty.transport@4.1.94.Final/io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:652)\n\tat io.netty.transport@4.1.94.Final/io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)\n\tat io.netty.common@4.1.94.Final/io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)\n\tat io.netty.common@4.1.94.Final/io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)\n\tat java.base/java.lang.Thread.run(Thread.java:1570)\n"}
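As far as I can tell from the log, the initial handshake to 10.0.0.10:9300 succeeds, but the remote node then advertises 10.0.1.174:9300 as its publish address, and the followup connection to that address fails with "No route to host". That 10.0.1.x address is on a different subnet than my 10.0.0.x seed hosts, so I suspect the node is publishing an address from another overlay network (possibly the swarm ingress network) rather than from eth1.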

I've tried different combinations of network.publish_host and other network settings for this setup (one variant is sketched below). Unfortunately, nothing has worked.
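That variant drops the hard-coded IPs entirely: both services join a dedicated attachable overlay network and discover each other by service name. The network name es-net, the node.name values, and the dnsrr endpoint mode are all my own choices here, so treat this as a sketch of the approach rather than a known-good config (ports and volumes omitted for brevity):

services:
    elasticsearch:
        image: elastic/elasticsearch:8.13.0
        environment:
            - cluster.name=swarm-cluster
            - node.name=elasticsearch
            - xpack.security.enabled=false
            # discover the peer by its service name on the shared overlay
            - discovery.seed_hosts=elasticsearch2
            # initial masters referenced by node.name instead of IP:port
            - cluster.initial_master_nodes=elasticsearch,elasticsearch2
        networks:
            - es-net
        deploy:
            endpoint_mode: dnsrr
            placement:
                constraints:
                    - node.hostname==main

    elasticsearch2:
        image: elastic/elasticsearch:8.13.0
        environment:
            - cluster.name=swarm-cluster
            - node.name=elasticsearch2
            - xpack.security.enabled=false
            - discovery.seed_hosts=elasticsearch
            - cluster.initial_master_nodes=elasticsearch,elasticsearch2
        networks:
            - es-net
        deploy:
            endpoint_mode: dnsrr
            placement:
                constraints:
                    - node.hostname==main2

networks:
    es-net:
        driver: overlay
        attachable: true

The idea was that with endpoint_mode: dnsrr the service name resolves directly to the task's IP on es-net instead of a virtual IP, so the address a node publishes should be the same one its peer can actually reach.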
