Kibana is a server and needs its own certificate for HTTPS when clients connect to it. That is what we were discussing above, but why did you create a certificate and key for Logstash? The only reason you would need those is to do TLS client authentication of Logstash to Elasticsearch, and your Logstash Elasticsearch output plugin configuration shows that you don't do that.
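Just for completeness, a minimal sketch of what client authentication would involve (option names assumed from the same plugin generation as your cacert / ssl settings; you don't need this for your current setup):

xpack.security.http.ssl.client_authentication: required

keystore => "/etc/logstash/keys/logstash.p12"
keystore_password => "changeme"

The first line would go into elasticsearch.yml to require a client certificate on the HTTP layer, and the two plugin options would make Logstash present its own certificate and key. Since you don't do any of this, the Logstash certificate and key are simply unused.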
This is exactly the same error you encountered before and I explained above what that means:
When you use this command to generate a key and certificate for Kibana, you need to use the hostname or FQDN of the Kibana host. This is, however, irrelevant to your Logstash problem; see again my answer above with regards to the Kibana certificate, and if there are any questions on that we can discuss them in a separate answer.
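As a sketch only (the DNS name below is a placeholder, use your Kibana host's FQDN), that could look like:

bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 --name kibana --dns my.kibana.com --pem --out kibana-certs.zip

The PEM certificate and key in that zip would then go into server.ssl.certificate and server.ssl.key in kibana.yml, but as said, that is a separate discussion.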
Let's take a step back from the above and focus on your Logstash issues. Logstash attempts to communicate with Elasticsearch over HTTPS on port 9200. Elasticsearch is configured for TLS on the HTTP layer (you never showed your config, but I assume so from the errors) with:
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.http.ssl.truststore.path: certs/elastic-certificates.p12
This elastic-certificates.p12 contains the certificate and the key that Elasticsearch uses for TLS on the HTTP layer. Since you didn't provide a DNS name when you ran the bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 command, the certificate was created with a default subject of CN=instance. For TLS, that means that when a client connects over HTTPS, Elasticsearch says "Hi, I'm CN=instance, this is my certificate".
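You can see this for yourself by printing the certificate inside the PKCS#12 file (it will prompt for the keystore password; adjust the path to wherever the file lives on your Elasticsearch node):

openssl pkcs12 -in certs/elastic-certificates.p12 -clcerts -nokeys | openssl x509 -noout -text

In the output, the Subject will be CN=instance and there will be no X509v3 Subject Alternative Name section, which is exactly the problem.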
Cue to Logstash now. The same applies to monitoring and to the Elasticsearch output plugin, as your config is similar, but let's look at the output plugin as an example. You have it configured with:
ssl => true
ssl_certificate_verification => true
cacert => "/etc/logstash/keys/elastic-stack-ca.pem"
action => "index"
hosts => ["elasticnodehostname"]
This tells the plugin to connect to https://elasticnodehostname:9200 and use /etc/logstash/keys/elastic-stack-ca.pem to verify Elasticsearch's certificate. What happens is that the plugin connects to https://elasticnodehostname:9200 and Elasticsearch replies with "Hi, I'm CN=instance, this is my certificate". The plugin can verify the certificate's authenticity, as it is signed by the /etc/logstash/keys/elastic-stack-ca.pem CA certificate, but hostname verification fails: the plugin connects to elasticnodehostname, and Elasticsearch presents a certificate that says it is CN=instance.
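If you want to see exactly what the plugin sees, you can ask Elasticsearch for its certificate directly (using the same hostname and CA file as in your plugin config):

openssl s_client -connect elasticnodehostname:9200 -CAfile /etc/logstash/keys/elastic-stack-ca.pem </dev/null | openssl x509 -noout -subject

This should print a subject of CN=instance, which is what fails to match elasticnodehostname during hostname verification.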
I hope the above helps you understand what the issue is.
To solve it, you need to make sure that the certificate included in the certs/elastic-certificates.p12 that Elasticsearch uses has a correct DNS SAN so that it matches the node's hostname/FQDN. For example, if your Elasticsearch is reached at https://my.elasticsearch.com:9200, recreate elastic-certificates.p12 with:
bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 --dns my.elasticsearch.com
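Once you have replaced the keystore on the Elasticsearch node and restarted it, you can confirm the fix independently of Logstash, for example with (assuming the example name above, and that elastic-stack-ca.pem is the PEM export of the same CA):

curl --cacert /etc/logstash/keys/elastic-stack-ca.pem https://my.elasticsearch.com:9200

If curl no longer complains about the certificate (you will likely still get a 401 until you pass credentials, which is fine), hostname verification is satisfied and your Logstash output can keep ssl_certificate_verification => true, with hosts pointing at the same name, e.g. hosts => ["https://my.elasticsearch.com:9200"].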