Elasticsearch cluster error "client did not trust this server's certificate, closing connection"

Hi everyone.

This is the first time I'm setting up an Elasticsearch (7.8) cluster. The cluster runs on 3 RHEL 7.7 virtual machines in Azure.

The cluster runs just fine with security disabled, but I start running into problems as soon as I try to enable security. To do this I followed these instructions.
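
For the certificates I used elasticsearch-certutil, roughly like this (I'm reconstructing this from memory, so the exact flags may have differed slightly):

    # generate a CA in PEM format (writes a zip containing ca/ca.crt and ca/ca.key)
    /usr/share/elasticsearch/bin/elasticsearch-certutil ca --pem

    # generate a certificate for each node, signed by that CA
    /usr/share/elasticsearch/bin/elasticsearch-certutil cert --pem \
      --ca-cert ca.crt --ca-key ca.key \
      --name lwe1elkpoc000001 --dns lwe1elkpoc000001 --ip 10.251.3.24

The resulting .crt, .key, and ca.crt files then went into the certs/ directory referenced in the config below.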

When I try to check the cluster health, I get the following:

    $ curl -u "elastic:ChangeMe" 10.251.3.24:9200/_cluster/health/?pretty
    curl: (52) Empty reply from server
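
Since xpack.security.http.ssl.enabled is set to true, I assume the request actually has to go over HTTPS with the cluster's CA, something like this (the absolute path to ca.crt is my guess, based on the relative certs/ path in the config):

    # HTTPS request, trusting the cluster's own CA
    curl --cacert /etc/elasticsearch/certs/ca.crt \
         -u "elastic:ChangeMe" \
         "https://10.251.3.24:9200/_cluster/health?pretty"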

I then checked the logs, and this is what I found:

    org.elasticsearch.transport.RemoteTransportException: [lwe1elkpoc000001][10.251.3.24:9300][internal:cluster/coordination/join]
    Caused by: org.elasticsearch.cluster.coordination.CoordinationStateRejectedException: received a newer join from {lwe1elkpoc000001}{dETj_n79RzSivbd7xoBF9A}{dPCkDFNGQ5u2hJ-BxXP7oA}{10.251.3.24}{10.251.3.24:9300}{dilmrt}{ml.machine_memory=3954180096, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}
            at org.elasticsearch.cluster.coordination.JoinHelper$CandidateJoinAccumulator.handleJoinRequest(JoinHelper.java:447) ~[elasticsearch-7.8.0.jar:7.8.0]
            at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:526) ~[elasticsearch-7.8.0.jar:7.8.0]
            at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:489) ~[elasticsearch-7.8.0.jar:7.8.0]
            at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) ~[elasticsearch-7.8.0.jar:7.8.0]
            at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:376) ~[elasticsearch-7.8.0.jar:7.8.0]
            at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:363) ~[elasticsearch-7.8.0.jar:7.8.0]
            at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:476) ~[elasticsearch-7.8.0.jar:7.8.0]
            at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:129) ~[elasticsearch-7.8.0.jar:7.8.0]
            at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$1.doRun(SecurityServerTransportInterceptor.java:257) ~[?:?]
            at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.8.0.jar:7.8.0]
            at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler.messageReceived(SecurityServerTransportInterceptor.java:315) ~[?:?]
            at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:63) ~[elasticsearch-7.8.0.jar:7.8.0]
            at org.elasticsearch.transport.TransportService$8.doRun(TransportService.java:801) ~[elasticsearch-7.8.0.jar:7.8.0]
            at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:695) [elasticsearch-7.8.0.jar:7.8.0]
            at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.8.0.jar:7.8.0]
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]
            at java.lang.Thread.run(Thread.java:832) [?:?]
    [2020-07-10T19:03:05,253][WARN ][o.e.c.c.ClusterFormationFailureHelper] [lwe1elkpoc000001] master not discovered or elected yet, an election requires two nodes with ids [dETj_n79RzSivbd7xoBF9A, 52tWIV0cTuiInZgtUclw3g], have discovered [{lwe1elkpoc000001}{dETj_n79RzSivbd7xoBF9A}{dPCkDFNGQ5u2hJ-BxXP7oA}{10.251.3.24}{10.251.3.24:9300}{dilmrt}{ml.machine_memory=3954180096, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}, {lwe1elkpoc000004}{LOJ1cMApQXe4j_zL7ManyA}{zqxPGQLNSZuwAFoTxyc2Iw}{10.251.3.80}{10.251.3.80:9300}{dilmrt}{ml.machine_memory=3954180096, ml.max_open_jobs=20, xpack.installed=true, transform.node=true}, {lwe1elkpoc000002}{52tWIV0cTuiInZgtUclw3g}{M8k6QTmGTZOj4j_qYu_flw}{10.251.3.55}{10.251.3.55:9300}{dilmrt}{ml.machine_memory=3954180096, ml.max_open_jobs=20, xpack.installed=true, transform.node=true}] which is a quorum; discovery will continue using [10.251.3.55:9300, 10.251.3.80:9300] from hosts providers and [{lwe1elkpoc000001}{dETj_n79RzSivbd7xoBF9A}{dPCkDFNGQ5u2hJ-BxXP7oA}{10.251.3.24}{10.251.3.24:9300}{dilmrt}{ml.machine_memory=3954180096, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}] from last-known cluster state; node term 77, last-accepted version 0 in term 0
    Netty4TcpChannel{localAddress=/10.251.3.24:9300, remoteAddress=/10.251.3.55:51564}
    [2020-07-10T19:03:08,373][WARN ][o.e.x.c.s.t.n.SecurityNetty4Transport] [lwe1elkpoc000001] client did not trust this server's certificate, closing connection Netty4TcpChannel{localAddress=/10.251.3.24:9300, remoteAddress=/10.251.3.80:38914}
    [2020-07-10T19:03:08,744][INFO ][o.e.c.c.JoinHelper       ] [lwe1elkpoc000001] failed to join {lwe1elkpoc000001}{dETj_n79RzSivbd7xoBF9A}{dPCkDFNGQ5u2hJ-BxXP7oA}{10.251.3.24}{10.251.3.24:9300}{dilmrt}{ml.machine_memory=3954180096, xpack.installed=true, transform.node=true, ml.max_open_jobs=20} with JoinRequest{sourceNode={lwe1elkpoc000001}{dETj_n79RzSivbd7xoBF9A}{dPCkDFNGQ5u2hJ-BxXP7oA}{10.251.3.24}{10.251.3.24:9300}{dilmrt}{ml.machine_memory=3954180096, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}, minimumTerm=76, optionalJoin=Optional[Join{term=77, lastAcceptedTerm=0, lastAcceptedVersion=0, sourceNode={lwe1elkpoc000001}{dETj_n79RzSivbd7xoBF9A}{dPCkDFNGQ5u2hJ-BxXP7oA}{10.251.3.24}{10.251.3.24:9300}{dilmrt}{ml.machine_memory=3954180096, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}, targetNode={lwe1elkpoc000001}{dETj_n79RzSivbd7xoBF9A}{dPCkDFNGQ5u2hJ-BxXP7oA}{10.251.3.24}{10.251.3.24:9300}{dilmrt}{ml.machine_memory=3954180096, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}}]}
    org.elasticsearch.transport.RemoteTransportException: [lwe1elkpoc000001][10.251.3.24:9300][internal:cluster/coordination/join]
    Caused by: org.elasticsearch.cluster.coordination.CoordinationStateRejectedException: received a newer join from {lwe1elkpoc000001}{dETj_n79RzSivbd7xoBF9A}{dPCkDFNGQ5u2hJ-BxXP7oA}{10.251.3.24}{10.251.3.24:9300}{dilmrt}{ml.machine_memory=3954180096, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}
            at org.elasticsearch.cluster.coordination.JoinHelper$CandidateJoinAccumulator.handleJoinRequest(JoinHelper.java:447) [elasticsearch-7.8.0.jar:7.8.0]
            at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:526) [elasticsearch-7.8.0.jar:7.8.0]
            at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:489) [elasticsearch-7.8.0.jar:7.8.0]
            at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) [elasticsearch-7.8.0.jar:7.8.0]
            at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:376) [elasticsearch-7.8.0.jar:7.8.0]
            at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:363) [elasticsearch-7.8.0.jar:7.8.0]
            at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:476) [elasticsearch-7.8.0.jar:7.8.0]
            at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:129) [elasticsearch-7.8.0.jar:7.8.0]
            at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$1.doRun(SecurityServerTransportInterceptor.java:257) [x-pack-security-7.8.0.jar:7.8.0]
            at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.8.0.jar:7.8.0]
            at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler.messageReceived(SecurityServerTransportInterceptor.java:315) [x-pack-security-7.8.0.jar:7.8.0]
            at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:63) [elasticsearch-7.8.0.jar:7.8.0]
            at org.elasticsearch.transport.TransportService$8.doRun(TransportService.java:801) [elasticsearch-7.8.0.jar:7.8.0]
            at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:695) [elasticsearch-7.8.0.jar:7.8.0]
            at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.8.0.jar:7.8.0]
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]
            at java.lang.Thread.run(Thread.java:832) [?:?]

This is the /etc/hosts file:

    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

    lwe1elkpoc000001     10.251.3.24
    lwe1elkpoc000002     10.251.3.55
    lwe1elkpoc000004     10.251.3.80

This is the /etc/elasticsearch/elasticsearch.yml file:

    path:
      logs: /elastic_data/log
      data: /elastic_data/data

    cluster.name: "azbnl-elasticsearch"
    node.name: "lwe1elkpoc000001"

    network.host: "10.251.3.24"
    http.port: 9200

    discovery.seed_hosts:
      - lwe1elkpoc000001
      - lwe1elkpoc000002
      - lwe1elkpoc000004

    cluster.initial_master_nodes:
      - lwe1elkpoc000001
      - lwe1elkpoc000002

    xpack.security.enabled: true
    xpack.security.http.ssl.enabled: true
    xpack.security.transport.ssl.enabled: true
    xpack.security.http.ssl.key: certs/lwe1elkpoc000001.key
    xpack.security.http.ssl.certificate: certs/lwe1elkpoc000001.crt
    xpack.security.http.ssl.certificate_authorities: certs/ca.crt
    xpack.security.transport.ssl.key: certs/lwe1elkpoc000001.key
    xpack.security.transport.ssl.certificate: certs/lwe1elkpoc000001.crt
    xpack.security.transport.ssl.certificate_authorities: certs/ca.crt
    xpack.security.transport.ssl.verification_mode: none

The other two nodes have a similar configuration file (apart from the node names).
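
To sanity-check the certificates themselves, I think something like the following on each node should confirm that every node certificate is signed by the same CA (again, the absolute paths are my assumption of where the relative certs/ directory resolves to):

    # confirm the node certificate chains to the bundled CA
    openssl verify -CAfile /etc/elasticsearch/certs/ca.crt \
      /etc/elasticsearch/certs/lwe1elkpoc000001.crt

    # this fingerprint should be identical on all three nodes
    openssl x509 -in /etc/elasticsearch/certs/ca.crt -noout -fingerprint -sha256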

Having followed the instructions from the website, I don't know what the problem is. Could you give me a hand?

Thanks
