Logstash shows errors for a new client in the log: "javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate"

Dear all,

I need your support with a Filebeat/Logstash configuration issue.
Filebeat 7.17.0
Logstash 7.16.3

Previously this was an ELK cluster at version 7.7.1, which I upgraded to 7.16.3.

Certificate preparation on the first node of the ELK cluster (elkn1):

cat filebeats.yml
instances:
  - name: 'new_klient'
    dns: [ 'new_klient' ]
    ip: [ '192.168.1.100' ]

/usr/share/elasticsearch/bin/elasticsearch-certutil cert --pem --ca-cert /etc/elasticsearch/certs/ca.crt --ca-key /root/.certs/ca.key --in /tmp/filebeats.yml --out /tmp/certs/filebeats.zip
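
The archive then has to be unpacked and the client certificate, key and CA copied to the new client host. A minimal sketch of how that can look (the destination paths match the Filebeat configuration below; the copy commands and target host are an assumption, not a verbatim record of what was run):

# on elkn1: unpack the archive produced by elasticsearch-certutil
unzip /tmp/certs/filebeats.zip -d /tmp/certs/

# copy the client certificate/key and the cluster CA to the new client
# (destination host and paths are assumptions matching filebeat.yml below)
scp /tmp/certs/new_klient/new_klient.crt root@new_klient:/etc/filebeat/certs/
scp /tmp/certs/new_klient/new_klient.key root@new_klient:/etc/filebeat/certs/
scp /etc/elasticsearch/certs/ca.crt root@new_klient:/etc/filebeat/certs/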

Filebeat configuration:

filebeat.config.inputs:
  path: ${path.config}/conf.d/*.yml
  reload.enabled: true

output.logstash:
  hosts: ["elkn1:5044", "elkn2:5044", "elkn3:5044"]
  loadbalance: true

  ssl.certificate_authorities: ["/etc/filebeat/certs/ca.crt"]
  ssl.certificate: "/etc/filebeat/certs/new_klient.crt"
  ssl.key: "/etc/filebeat/certs/new_klient.key"

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
  - add_locale:
      format: abbreviation
  - add_id: ~

logging.level: debug
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0644
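
Before shipping data, the TLS connection from the new client to all three Logstash hosts can be checked with the filebeat test command (assuming the default configuration path):

# verify connectivity and the TLS handshake to every configured Logstash host
filebeat test output -c /etc/filebeat/filebeat.yml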

Logstash configuration:

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate_authorities => ["/etc/logstash/certs/ca.crt"]
    ssl_certificate => "/etc/logstash/certs/${HOSTNAME}.crt"
    ssl_key => "/etc/logstash/certs/${HOSTNAME}.p8"
    ssl_verify_mode => "force_peer"
  }
}
...
output {
  elasticsearch {
    hosts => ["https://${HOSTNAME}:9200"]
    cacert => '/etc/logstash/certs/ca.crt'
    user => 'logstash_internal'
    password => '${ES_PWD}'
    ilm_enabled => false
    document_id => "%{[@metadata][_id]}"
    index => "%{[@metadata][index_prefix]}"
  }
}
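
Because ssl_verify_mode is set to force_peer, Logstash only accepts clients that present a certificate signed by a CA listed in ssl_certificate_authorities; a bad_certificate alert typically means that check fails. Whether the new client certificate actually chains to that CA can be confirmed with openssl (the client certificate path is the one from the unpacked archive above):

# on a Logstash node: does the client certificate chain to the trusted CA?
openssl verify -CAfile /etc/logstash/certs/ca.crt /tmp/certs/new_klient/new_klient.crt

# inspect the names the client certificate was issued for
openssl x509 -in /tmp/certs/new_klient/new_klient.crt -noout -text | grep -A1 'Subject Alternative Name'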

Filebeat shows these errors in its log file:

2022-02-14T10:40:40.319+0100    INFO    [publisher_pipeline_output]     pipeline/output.go:143  Connecting to backoff(async(tcp://elkn3:5044))
2022-02-14T10:40:40.320+0100    INFO    [publisher_pipeline_output]     pipeline/output.go:143  Connecting to backoff(async(tcp://elkn1:5044))
2022-02-14T10:40:40.320+0100    INFO    [publisher_pipeline_output]     pipeline/output.go:143  Connecting to backoff(async(tcp://elkn2:5044))
2022-02-14T10:40:40.345+0100    INFO    [publisher_pipeline_output]     pipeline/output.go:151  Connection to backoff(async(tcp://elkn2:5044)) established
2022-02-14T10:40:40.346+0100    DEBUG   [logstash]      logstash/async.go:172   2 events out of 2 events sent to logstash host elkn2:5044. Continue sending
2022-02-14T10:40:41.584+0100    ERROR   [publisher_pipeline_output]     pipeline/output.go:154  Failed to connect to backoff(async(tcp://elkn3:5044)): x509: certificate is valid for elkn3, not elkn2
2022-02-14T10:40:42.159+0100    ERROR   [publisher_pipeline_output]     pipeline/output.go:154  Failed to connect to backoff(async(tcp://elkn1:5044)): x509: certificate is valid for elkn1, not elkn2
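
The x509 errors suggest Filebeat is checking the certificate presented by one node against another node's hostname. Which certificate each Logstash node actually presents on the Beats port can be inspected with openssl s_client; a rough sketch (with force_peer the handshake itself is rejected, but the server certificate is still printed):

# show the certificate subject presented by each Logstash node on port 5044
for h in elkn1 elkn2 elkn3; do
  echo "== $h =="
  openssl s_client -connect $h:5044 -servername $h </dev/null 2>/dev/null | openssl x509 -noout -subject
done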

Logstash shows this error on two nodes, elkn1 and elkn3:

[2022-02-14T09:37:00,005][INFO ][org.logstash.beats.BeatsHandler][main][05f334767d979cd3f20cc63a381266fd4a7ab6a18fe7b66ada8418a3f356b974] [local: 0.0.0.0:5044, remote: 192.168.1.100:39352] Handling exception: io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate (caused by: javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate)
[2022-02-14T09:37:00,010][WARN ][io.netty.channel.DefaultChannelPipeline][main][05f334767d979cd3f20cc63a381266fd4a7ab6a18fe7b66ada8418a3f356b974] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate
        at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:477) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:655) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:581) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-all-4.1.65.Final.jar:4.1.65.Final]
        at java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate
        at sun.security.ssl.Alert.createSSLException(Alert.java:131) ~[?:?]
        at sun.security.ssl.Alert.createSSLException(Alert.java:117) ~[?:?]
        at sun.security.ssl.TransportContext.fatal(TransportContext.java:336) ~[?:?]
        at sun.security.ssl.Alert$AlertConsumer.consume(Alert.java:293) ~[?:?]
        at sun.security.ssl.TransportContext.dispatch(TransportContext.java:185) ~[?:?]
        at sun.security.ssl.SSLTransport.decode(SSLTransport.java:172) ~[?:?]
        at sun.security.ssl.SSLEngineImpl.decode(SSLEngineImpl.java:681) ~[?:?]
        at sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:636) ~[?:?]
        at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:454) ~[?:?]
        at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:433) ~[?:?]
        at javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:637) ~[?:?]
        at io.netty.handler.ssl.SslHandler$SslEngineType$3.unwrap(SslHandler.java:298) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1338) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1234) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1280) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:507) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:446) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        ... 17 more
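
To see in more detail why the handshake is rejected, JDK TLS debugging can be enabled on one Logstash node; a minimal sketch, assuming the standard jvm.options location:

# /etc/logstash/jvm.options - add this line, then restart Logstash
-Djavax.net.debug=ssl,handshake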

Any idea what is wrong?

Best Regards,
Dan

The problem was caused by the Filebeat version being newer than the ELK stack. I downgraded Filebeat to version 7.16.3 and the problem no longer occurs. Thanks.
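
For anyone hitting the same issue: the downgrade can be done through the package manager. A rough sketch, assuming an RPM-based client (on Debian/Ubuntu the equivalent would be apt-get install filebeat=7.16.3):

# pin Filebeat to the same version as the ELK stack; the exact version string may differ
yum downgrade filebeat-7.16.3
systemctl restart filebeat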
