Hi All,
We are running:
Kibana 7.4.0
Elastic 7.4.0
Logstash 7.4.0
Filebeat (33 instances) 6.8.3
We found out we were missing some log lines from two different Filebeat clients at different times. After some searching, the only thing we could find was this error in Logstash, repeating every 30 seconds:
[2020-01-23T14:06:28,817][INFO ][org.logstash.beats.BeatsHandler] [local: 0.0.0.0:5044, remote: 172..0.57:48228] Handling exception: javax.net.ssl.SSLHandshakeException: error:10000412:SSL routines:OPENSSL_internal:SSLV3_ALERT_BAD_CERTIFICATE
[2020-01-23T14:06:28,818][WARN ][io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: error:10000412:SSL routines:OPENSSL_internal:SSLV3_ALERT_BAD_CERTIFICATE
After looking at the remote host we found that it was still running an old process with an old, expired certificate. After killing and restarting that process, the SSL errors on Logstash were gone.
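For anyone hitting the same error, a quick way to spot an expired client certificate is with `openssl` (assuming it is installed on the host). The sketch below generates a throwaway self-signed certificate just to demonstrate; the file paths are placeholders, not the actual Filebeat paths from our setup:

```shell
# Create a throwaway self-signed cert (1 day validity) purely for demonstration.
# In practice you would point the x509 commands at the Filebeat client's cert file.
openssl req -x509 -newkey rsa:2048 -keyout /tmp/demo_key.pem -out /tmp/demo_cert.pem \
    -days 1 -nodes -subj "/CN=demo" 2>/dev/null

# Print the expiry date of the certificate.
openssl x509 -noout -enddate -in /tmp/demo_cert.pem

# Exit status check: fails if the cert expires within the next hour (3600 s).
openssl x509 -noout -checkend 3600 -in /tmp/demo_cert.pem && echo "still valid"
```

Running `-checkend` periodically (e.g. from cron) can warn you before a client certificate expires instead of finding out via handshake errors in the Logstash log.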
The real question is:
Is it possible that when the client with the expired certificate tries to connect, the SSL engine breaks and closes the connection of another client, so that some log lines from that second client go missing?
Regards, Daniel