Hello,
TL;DR: the certificate we use with the elasticsearch output plugin does not work with the elasticsearch input plugin, despite being a valid certificate.
We are currently attempting to connect to an existing index in our ES cluster to pull already-parsed logs for reprocessing/dedupe operations. However, with the following elasticsearch input configuration:
input {
  elasticsearch {
    hosts    => ["<ES hosts we are connecting to>"]
    ssl      => true
    ca_file  => "/etc/logstash/certs/ca-chain.cert.pem"
    user     => "<username>"
    password => "<password>"
    index    => "<source index>"
    query    => '{ "sort": [ "_doc" ] }'
  }
}
we run into the following error, even though the specified .pem file is used in output directives without issue in other configurations:
[2021-11-09T17:53:37,972][ERROR][logstash.javapipeline ][cf_dedupe][a13600946a868fde8267e8615aee0f8243ae83d056df9647cc61e26ede6849f8] A plugin had an unrecoverable error. Will restart this plugin.
[SNIP]
Error: Received fatal alert: bad_certificate
and on the source ES server we see the following:
[2021-11-09T18:03:21,991][WARN ][o.e.h.AbstractHttpServerTransport] [dev-elastic-1] caught exception while handling client http traffic, closing connection Netty4HttpChannel{localAddress=XXX, remoteAddress=XXX}
io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: Empty client certificate chain
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:471) ~[netty-codec-4.1.49.Final.jar:4.1.49.Final]
The .pem file referenced in the input path has been used successfully by our output plugins to write to ES for a long time. Examining the file, it contains both the root and intermediate CAs, and neither certificate is expired, so it is very puzzling that this is not working.
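For what it's worth, the server-side "Empty client certificate chain" warning reads as though the server asked for a client certificate and Logstash never sent one (`ca_file` only establishes trust of the server's certificate; it does not present a client certificate). A hypothetical way to reproduce the handshake outside Logstash, with the host/port as placeholders for our environment:

```shell
# Diagnostic sketch (host/port are placeholders for our environment):
# reproduce the TLS handshake against the ES HTTP endpoint using the same CA chain.
openssl s_client -connect <ES host>:9200 \
  -CAfile /etc/logstash/certs/ca-chain.cert.pem

# If the server closes the connection unless -cert/-key are also supplied,
# that would confirm it is enforcing mutual TLS (client certificate auth)
# rather than rejecting the CA chain itself.
```

I have not confirmed whether our ES nodes require client certificate authentication on the HTTP layer; if they do, that would explain why a CA file alone is insufficient here.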
Has anyone experienced this particular issue or a variation on it and, if so, what did you end up doing to make it work?
Thank you,
Peter