Fatal alert: bad_certificate

Hi all,
A year ago I set up a cluster with X-Pack security enabled. Everything was fine and the cluster went into production. For the last couple of days I've been getting a bad_certificate error, but only for the second node. I've regenerated all the certificates with the elasticsearch-certutil tool, but the problem remains. This is the log output from one node of the cluster:

    [2021-04-26T17:23:13,020][WARN ][o.e.h.AbstractHttpServerTransport] [qcclienti01] caught exception while handling client http traffic, closing connection Netty4HttpChannel{localAddress=/192.168.1xx.11:9200, remoteAddress=/192.168.1xx.12:42846}
    io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate
            at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:471) ~[netty-codec-4.1.49.Final.jar:4.1.49.Final]
            at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276) ~[netty-codec-4.1.49.Final.jar:4.1.49.Final]
            at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
            at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
            at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
            at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
            at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
            at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
            at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
            at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
            at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
            at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:615) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
            at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:578) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
            at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
            at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-common-4.1.49.Final.jar:4.1.49.Final]
            at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.49.Final.jar:4.1.49.Final]
            at java.lang.Thread.run(Thread.java:832) [?:?]

This is the X-Pack part of the configuration:

    xpack.security.enabled: true
    xpack.security.http.ssl.enabled: true
    xpack.security.http.ssl.truststore.path: elastic-XXXXXX1.p12
    xpack.security.http.ssl.keystore.path: elastic-XXXXXX1.p12
    xpack.security.transport.ssl.enabled: true
    xpack.security.transport.ssl.verification_mode: certificate
    xpack.security.transport.ssl.keystore.path: elastic-XXXXXX1.p12
    xpack.security.transport.ssl.truststore.path: elastic-XXXXXX1.p12

The keystore/truststore passwords are stored in the Elasticsearch keystore.
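For completeness, the regeneration and password setup were done roughly along these lines; the file names below are placeholders, not the exact ones used:

    # Regenerate a CA and a node certificate with elasticsearch-certutil
    bin/elasticsearch-certutil ca --out elastic-stack-ca.p12
    bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 --out elastic-XXXXXX1.p12

    # Store the PKCS#12 passwords as secure settings on each node
    bin/elasticsearch-keystore add xpack.security.http.ssl.keystore.secure_password
    bin/elasticsearch-keystore add xpack.security.http.ssl.truststore.secure_password
    bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
    bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password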

    [2021-04-26T17:23:13,020][WARN ][o.e.h.AbstractHttpServerTransport] [qcclienti01] caught exception while handling client http traffic, closing connection Netty4HttpChannel{localAddress=/192.168.1xx.11:9200, remoteAddress=/192.168.1xx.12:42846}
    io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate

This probably isn't something that you can fix from the Elasticsearch side.

Whatever client is running at 192.168.1xx.12 doesn't trust the certificate your Elasticsearch node is presenting. You can't force it to trust that certificate, so you need to work out what that client is and how to configure it to trust your CA.
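If it helps with the debugging, one way to check, from the machine at 192.168.1xx.12, whether a given CA bundle trusts the certificate the node is actually serving is something like this (the CA path is a placeholder):

    # Print the certificate the node presents on the HTTP port and verify it
    # against a CA file; look for "Verify return code: 0 (ok)" in the output
    openssl s_client -connect 192.168.1xx.11:9200 -CAfile /path/to/your-ca.pem </dev/null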

The problem is that the client at 192.168.1xx.12 is an Elasticsearch node of this same cluster, and I've already changed the certificates on all of the nodes.

I see a similar problem in the Logstash logs. I haven't found a solution yet; only restarting the service gets Logstash working correctly again. I'm still analyzing the problem and looking for a fix, so I'm following this thread for updates...

It's possible that it's another node, but it's making HTTP calls, not transport protocol calls, so this is not standard cross-node traffic.
It's some sort of HTTP client - possibly monitoring, possibly Watcher, or possibly something outside of Elasticsearch that happens to run on that same machine.
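If it isn't obvious what is running there, something like this on 192.168.1xx.12 will show which processes hold connections to the node's HTTP port (root is needed to see other users' processes):

    # List established TCP connections involving port 9200 and the owning processes
    sudo ss -tnp | grep ':9200'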

You're right, there was an Auditbeat instance on the same server with an old CA certificate.
I hadn't noticed :slight_smile: my mistake.
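In case anyone else runs into the same thing, the cleanup boils down to pointing Auditbeat's Elasticsearch output at the new CA and re-testing the connection before restarting; the paths and host below are placeholders for my setup:

    # In /etc/auditbeat/auditbeat.yml (assumed default location) the output
    # section has to reference the new CA:
    #
    #   output.elasticsearch:
    #     hosts: ["https://192.168.1xx.11:9200"]
    #     ssl.certificate_authorities: ["/etc/auditbeat/new-ca.pem"]
    #
    # Then check the config and the TLS connection, and restart the service:
    sudo auditbeat test config -c /etc/auditbeat/auditbeat.yml
    sudo auditbeat test output -c /etc/auditbeat/auditbeat.yml
    sudo systemctl restart auditbeat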
