"bad_certificate" error on elasticsearch input plugin

Hello,

TLDR - The certificate we use with the elasticsearch output plugin does not work with the elasticsearch input plugin, despite being a valid certificate.

We are currently attempting to connect to an existing index in our ES setup to grab already-parsed logs for reprocessing/dedupe operations. However, using the following ES input configuration:

input {
  elasticsearch {
    hosts => ["<ES hosts we are connecting to>"]
    ssl => true
    ca_file => "/etc/logstash/certs/ca-chain.cert.pem"
    user => "<username>"
    password => "<password>"
    index => "<source index>"
    query => '{ "sort": [ "_doc" ] }'
  }
}

we run into the following error stack, despite the specified .pem file being used without issue in output directives in other configurations:

[2021-11-09T17:53:37,972][ERROR][logstash.javapipeline    ][cf_dedupe][a13600946a868fde8267e8615aee0f8243ae83d056df9647cc61e26ede6849f8] A plugin had an unrecoverable error. Will restart this plugin.
[SNIP]
  Error: Received fatal alert: bad_certificate

and on the source ES server we see the following:

[2021-11-09T18:03:21,991][WARN ][o.e.h.AbstractHttpServerTransport] [dev-elastic-1] caught exception while handling client http traffic, closing connection Netty4HttpChannel{localAddress=XXX, remoteAddress=XXX}
io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: Empty client certificate chain
        at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:471) ~[netty-codec-4.1.49.Final.jar:4.1.49.Final]

The .pem file referenced in the input path has been used successfully in our output plugins to write to ES for as long as we can remember. Examining the file, it contains both root and intermediate CAs with valid, non-expired dates, so it is very puzzling that this is not working.

Has anyone experienced this particular issue or a variation on it and, if so, what did you end up doing to make it work?

Thank you,
Peter

That seems to be the elasticsearch server complaining that the client did not send a certificate to authenticate itself. I do not know whether logstash can be configured to send a client certificate.

I figured that with the ca_file option it would send a certificate. Taking a step back for context: we are attempting to scan an existing index for duplicates and then port the dupes into a new index per this documentation, which suggested using the elasticsearch input plugin to gather the existing dataset, running the fingerprint filter, and then pushing tagged entries to a new index (roughly the pipeline sketched below). Typically our logs go through Kafka on our infrastructure and then into ES, so this is our first attempt to use the elasticsearch plugin to reprocess existing logs as opposed to ingesting from another source.
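For concreteness, here is a minimal sketch of the pipeline we are aiming for, following that documentation. The hosts, index names, and the fingerprint source field are placeholders/assumptions for illustration, not our production values:

input {
  elasticsearch {
    hosts => ["<ES hosts we are connecting to>"]
    ssl => true
    ca_file => "/etc/logstash/certs/ca-chain.cert.pem"
    user => "<username>"
    password => "<password>"
    index => "<source index>"
    query => '{ "sort": [ "_doc" ] }'
  }
}

filter {
  fingerprint {
    source => ["message"]                  # field(s) that define a duplicate
    target => "[@metadata][fingerprint]"
    method => "SHA256"
  }
}

output {
  elasticsearch {
    hosts => ["<ES hosts we are connecting to>"]
    document_id => "%{[@metadata][fingerprint]}"  # identical fingerprints collapse onto one _id
    index => "<dedupe index>"
  }
}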

No, it just configures the chain used to verify the certificate that the elasticsearch server presents.
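To make the distinction concrete: on the output side the plugin can both verify the server and present a client identity, and those are separate options. A rough sketch, where the keystore path and password are hypothetical; as far as I know the elasticsearch input plugin exposes no equivalent of the keystore options:

output {
  elasticsearch {
    hosts => ["<ES hosts>"]
    ssl => true
    cacert => "/etc/logstash/certs/ca-chain.cert.pem"  # trust anchor: verifies the server, like ca_file on the input
    keystore => "/etc/logstash/certs/client.p12"       # client certificate + key presented to the server (hypothetical path)
    keystore_password => "<keystore password>"
  }
}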

Is there any way to provide a client certificate within this plugin's options, so that reads against the cluster for reprocessing existing/filtered logs are secure? From the outline here, the ca_file option seemed like the one I was looking for, but if there is another option (or another plugin, for that matter) to pull indexed logs back in for reprocessing, any pointers would be helpful. It seems like a major limitation if Logstash can't pull from ES when it's secured, without an option to provide a key/cert...

I do not think so. See here, here, here and here.
