TLS configuration


I've run out of ideas.
I have Kibana using my wildcard cert (*), and Kibana itself is OK. My kibana.yml:

server.publicBaseUrl: ""

elasticsearch.hosts: ["", "", "", "",""]

elasticsearch.username: "kibana_system"
elasticsearch.password: "psw"

server.ssl.enabled: true
server.ssl.certificate: /etc/kibana/hostname.net_cert.cer
server.ssl.key: /etc/kibana/

elasticsearch.ssl.certificate: /etc/kibana/hostname.net_cert.cer
elasticsearch.ssl.key: /etc/kibana/
elasticsearch.ssl.certificateAuthorities: [ "/etc/kibana/MyCA.cer" ]
elasticsearch.ssl.verificationMode: full
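A quick sanity check that can save time with setups like this: confirm that the server certificate actually chains to the CA file you reference. The sketch below builds throwaway openssl files just to demonstrate the check; against the real files (paths assumed from the config above) it would be `openssl verify -CAfile /etc/kibana/MyCA.cer /etc/kibana/hostname.net_cert.cer`.

```shell
# Self-contained demo of `openssl verify`: throwaway CA + server cert.
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Throwaway CA (stand-in for MyCA.cer)
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -days 1 -subj "/CN=demo-ca" 2>/dev/null

# Throwaway server cert signed by that CA (stand-in for the wildcard cert)
openssl req -newkey rsa:2048 -nodes -keyout srv.key -out srv.csr \
  -subj "/CN=kibana.example.net" 2>/dev/null
openssl x509 -req -in srv.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out srv.crt -days 1 2>/dev/null

# Prints "srv.crt: OK" when the chain is good
openssl verify -CAfile ca.crt srv.crt
```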

I have 4 Elasticsearch nodes; they communicate over TLS using my real wildcard cert, and they work.
My elasticsearch.yml security settings:

xpack.security.enabled: true
xpack.security.enrollment.enabled: true
xpack.security.http.ssl:
  enabled: true
  key: certs/
  certificate: certs/hostname.net_cert.cer
  certificate_authorities: certs/MyCa.cer
  verification_mode: certificate
xpack.security.transport.ssl:
  enabled: true
  key: certs/
  certificate: certs/hostname.net_cert.cer
  certificate_authorities: certs/MyCa.cer
  verification_mode: certificate

Logstash didn't work; I get this error:
io.netty.handler.codec.DecoderException: Received fatal alert: bad_certificate

I configured Logstash in logstash.yml, and I think this configuration is OK because Logstash's communication with the Elasticsearch cluster uses my wildcard cert.

xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: psw
xpack.monitoring.elasticsearch.hosts:  ["", "", "", "",""]
xpack.monitoring.elasticsearch.ssl.certificate_authority: "/etc/logstash/certs/MyCa.cer"

Now let me explain what I think my problem is.
I can't use my wildcard cert on all my VMs (they use different certs), so I want to use self-signed certs between Logstash and Filebeat (which I deploy on separate "other" VMs).
So I used certutil on my first Elasticsearch node, which acts as my self-signed CA. I have ca.crt and ca.key (on elk1), and on elk1 I generated a cert for Logstash (logstash.crt and logstash.key -> logstash.pkcs8.key)

using these commands:
./bin/elasticsearch-certutil ca --pem -> generates the CA cert/key
./bin/elasticsearch-certutil cert --name logstash-prod --ca-cert ca.crt --ca-key ca.key --ip --pem -> generates the Logstash cert/key
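On the logstash.key -> logstash.pkcs8.key step: the beats input in Logstash expects a PKCS#8 key, and the conversion is one openssl command. A self-contained sketch with a throwaway key (substitute the real logstash.key and logstash.pkcs8.key):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Throwaway RSA key standing in for the certutil-generated logstash.key
openssl genrsa -out logstash.key 2048 2>/dev/null

# Convert to unencrypted PKCS#8, the format the beats input's ssl_key expects
openssl pkcs8 -topk8 -nocrypt -in logstash.key -out logstash.pkcs8.key

head -n1 logstash.pkcs8.key   # "-----BEGIN PRIVATE KEY-----"
```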

then I copied ca.crt to the Logstash host

my pipeline config:

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate_authorities => [ "ca.crt" ]
    ssl_certificate => "logstash.crt"
    ssl_key => "logstash.pkcs8.key"
  }
}

filter {
  # doesn't matter
}

output {
  elasticsearch {
    hosts => ["", "", "", ""]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}-%{[service][type]}"
    action => "create"
    user => "elastic"
    password => "psw"
    cacert => "MyCA.cer"
  }
}

And once this works, I will install Filebeat on the "other" VMs: I generate each cert/key on elk1 and copy over only the ca.crt and the self-signed cert/key pair.

On my dev env (elk7) it works, obviously, but I can't get it working on my prod env (elk8).

Which logfile is reporting that error? It looks like it's probably the Elasticsearch log - if so, then what error do you get in logstash?

Thank you for your answer.
It's the Logstash error log. I don't have any errors on the Elasticsearch nodes (or in Kibana or Filebeat).

Are you sure? Based on the error message, that is extremely unlikely to be true.

Received fatal alert: bad_certificate means that the other side of the connection rejected this side's certificate (but the TLS handshaking protocol doesn't provide a way for the reason to be included, so all we know is that something is wrong with the certificate).

So if logstash is reporting this in its log file, then some other process that it is communicating with is rejecting logstash's certificate. That other process should be providing details of that rejection in its own log file. It is almost impossible to diagnose this from the logstash side because the details are all on the other side.

So, either:

  1. This error you posted is from the Elasticsearch log, which means something (probably Logstash) is rejecting Elasticsearch's certificate, and we need to find the log file that gives us the detail
  2. Beats is rejecting Logstash's certificate (which might be the case - you haven't shown any beats configuration), but we would expect the logs for filebeat to provide an explanation of the problem.
  3. Elasticsearch is rejecting Logstash's certificate, but based on the configuration you've posted, Elasticsearch isn't requesting a certificate from Logstash, so this is unlikely.

My guess is it's option 2.
Can you:

  1. Double check the log files for any beats you are running.
  2. Post the config you're using for filebeat (or any other beats)

OK, I understand what you mean; I explained it wrong, or not completely.
The bad_certificate is in the Logstash error log, and it refers to the communication between Logstash (pipeline input) and Beats: Filebeat sends logs to Logstash (input), which filters them and then sends them to the Elasticsearch cluster (using my real certificate, not the self-signed one).

So the bad certificate is the self-signed one created by certutil.

Let me try to lay out my SSL setup as I understand it; I hope it's correct.

I use the first Elasticsearch node as the CA, with certutil.

bin/elasticsearch-certutil ca --pem -> creates the CA cert/key
./bin/elasticsearch-certutil cert --name logstash-prod --ca-cert ca.crt --ca-key ca.key --ip --pem -> creates the cert/key for Logstash, signed by the CA

Then on my CA (the first Elasticsearch node) I generate a cert/key for each client where Filebeat is installed, and I copy ca.crt to each client so it can verify the cert pair.

my filebeat.yml

# ------------------------------ Logstash Output -------------------------------

  hosts: [""]

  ssl.certificate_authorities: ["/etc/ssl_filebeat/elk-prod-ca.crt"] -> ca.crt created with the first command
  ssl.certificate: "/etc/ssl_filebeat/from_ansy_to_filebeat.crt" -> created with the command below
  ssl.key: "/etc/ssl_filebeat/from_ansy_to_filebeat.key" -> created with the command below

I generated them with this command, then copied the certs to the client VM running Filebeat:

        /usr/share/elasticsearch/bin/elasticsearch-certutil cert
        --ca-cert /etc/elasticsearch/self-certs/elk-prod-ca/elk-prod-ca.crt
        --ca-key /etc/elasticsearch/self-certs/elk-prod-ca/elk-prod-ca.key
        --ca-pass ""
        --pem --in /tmp/cert_ansible/instance_from_ansy.yml
        --out /tmp/cert_ansible/
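For reference, the `--in` file (`instance_from_ansy.yml` here) follows certutil's "instances" format; a sketch of what it could look like, with placeholder names and addresses (not taken from your setup). The `dns` entries are what matters for hostname verification on the connecting side:

```yaml
instances:
  - name: "from_ansy_to_filebeat"   # placeholder instance name
    dns:
      - "client.example.net"        # hostname peers connect to (placeholder)
    ip:
      - "10.0.0.5"                  # placeholder
```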

It's not easy to explain these things; I hope I've made myself clear :sweat_smile:


I filtered my Filebeat log; the error on Logstash is:

[2024-04-23T12:25:30,853][WARN ][][main][26c21825ff418a22c05ac9c898aa9f378fe734fb3408f928deebea98f3c2d945] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.handler.codec.DecoderException: Received fatal alert: bad_certificate

and in the Filebeat log:

{"log.level":"error","@timestamp":"2024-04-23T12:34:21.527+0200","log.logger":"publisher_pipeline_output","log.origin":{"":"pipeline/client_worker.go","file.line":148},"message":"Failed to connect to backoff(async(tcp:// x509: certificate is not valid for any names, but wanted to match","":"filebeat","ecs.version":"1.6.0"}

So where did I go wrong?
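That Filebeat message is the key clue: Go's TLS client reports "certificate is not valid for any names" when the server certificate contains no DNS names at all, which is what you get from a cert generated with `--ip` but no DNS entries. You can see the effect with openssl; this sketch builds a throwaway IP-only cert, but the same `openssl x509` line run against the real logstash.crt will show whether any `DNS:` entries are present.

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Throwaway self-signed cert with only an IP SAN (like `certutil cert --ip ...`)
openssl req -x509 -newkey rsa:2048 -nodes -keyout k.pem -out ip-only.crt \
  -days 1 -subj "/CN=logstash-prod" \
  -addext "subjectAltName=IP:10.0.0.5" 2>/dev/null

# Inspect the SANs: no DNS: entries here, so hostname verification must fail
openssl x509 -in ip-only.crt -noout -ext subjectAltName
```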

You should provide the DNS name that Filebeat uses to connect (via certutil's `--dns` option, alongside `--ip`) when you generate the server certificate for Logstash.