Client did not trust this server's certificate for the Cluster

I have been working on this for a while now and I still haven't been able to figure it out.

I have SSL working from Logstash to Elasticsearch, and from Kibana to Elasticsearch, but those settings don't work for adding nodes to my Elasticsearch cluster.

Here is my elasticsearch.yml

cluster.name: logstash-es.example.com
node.name: logstash-es01.ec2.example.com
node.master: true
node.data: true
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
path.repo: /var/lib/elasticsearch/backup
network.host: 0.0.0.0
#network.host: logstash-es01.ec2.example.com
transport.tcp.port: 9300
transport.tcp.compress: true
http.enabled: true
discovery.zen.hosts_provider: "ec2"
discovery.ec2.host_type: "private_ip"
discovery.ec2.endpoint: ec2.us-west-2.amazonaws.com
discovery.ec2.availability_zones: "us-west-2a,us-west-2b,us-west-2c"
discovery.ec2.groups: "Logstash Elasticsearch"
cluster.routing.allocation.awareness.attributes: aws_availability_zone
cloud.node.auto_attributes: true

xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.key:  /etc/elasticsearch/ssl/logstash-es01.key
xpack.security.http.ssl.certificate: /etc/elasticsearch/ssl/logstash-es01.crt 
xpack.security.http.ssl.certificate_authorities: [ "/etc/elasticsearch/ssl/ca.example.com.crt" ]
#These Settings should be uncommented
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.key:  /etc/elasticsearch/ssl/logstash-es01.key
xpack.security.transport.ssl.certificate: /etc/elasticsearch/ssl/logstash-es01.crt
xpack.security.transport.ssl.certificate_authorities: [ "/etc/elasticsearch/ssl/ca.example.com.crt" ]
xpack.security.transport.ssl.client_authentication: required
xpack.security.transport.ssl.verification_mode: full

#These settings need to be commented out
#xpack.ssl.verification_mode: full
#xpack.ssl.key:  /etc/elasticsearch/ssl/logstash-es01.key
#xpack.ssl.certificate: /etc/elasticsearch/ssl/logstash-es01.crt
#xpack.ssl.certificate_authorities: [ "/etc/elasticsearch/ssl/ca.example.com.crt" ]
#xpack.ssl.client_authentication: required

As I said, everything from Logstash and Kibana to Elasticsearch works fine, but when I try to bring up logstash-es02 or logstash-es03 I receive the following error:

[2019-04-29T20:16:10,829][WARN ][o.e.x.s.t.n.SecurityNetty4ServerTransport] [logstash-es01.ec2.example.com] client did not trust this server's certificate, closing connection NettyTcpChannel{localAddress=0.0.0.0/0.0.0.0:9300, remoteAddress=/10.1.39.247:34948}

Is there any way to find out why the client doesn't trust the server's cert? Is it a name thing, is it the fact that I am chaining? Does it just not like me?

I have specifically rebuilt the certs to make sure all of the names match, and everything is happy as far as Logstash and Kibana go, but I haven't figured out why Elasticsearch is failing.

If I comment out:

#These Settings should be uncommented
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.key:  /etc/elasticsearch/ssl/logstash-es01.key
xpack.security.transport.ssl.certificate: /etc/elasticsearch/ssl/logstash-es01.crt
xpack.security.transport.ssl.certificate_authorities: [ "/etc/elasticsearch/ssl/ca.example.com.crt" ]
xpack.security.transport.ssl.client_authentication: required
xpack.security.transport.ssl.verification_mode: full

And I uncomment:

#These settings need to be commented out
xpack.ssl.verification_mode: full
xpack.ssl.key:  /etc/elasticsearch/ssl/logstash-es01.key
xpack.ssl.certificate: /etc/elasticsearch/ssl/logstash-es01.crt
xpack.ssl.certificate_authorities: [ "/etc/elasticsearch/ssl/ca.example.com.crt" ]
xpack.ssl.client_authentication: required

Then everything works. Logstash and Kibana are happy, and I am able to join my other nodes to the cluster. But from what I was reading, xpack.ssl.{key,certificate,certificate_authorities,client_authentication} are for LDAP, which I am not using. So I should be using xpack.security.transport.ssl.{enabled,key,certificate,certificate_authorities,client_authentication} instead.

The commands that I am using to create my certs are as follows:

openssl genrsa -out logstash-es02.key 4096
openssl req -new -nodes -key logstash-es02.key -out logstash-es02.csr -config logstash.conf
openssl x509 -req -in logstash-es02.csr -CA CACert.pem -CAkey CAKey.pem -CAcreateserial -out logstash-es02.crt -days 900
openssl req -noout -text -in logstash-es02.csr

I made sure that the CN in my logstash.conf matches the server name.
All the settings are identical between my Elasticsearch nodes, other than incrementing the number: 01, 02, 03, etc.

I have added multiple aliases to the Java keystore as well:

logstash-es02.ec2.example.com
logstash-es02.example.com
logstash-es.example.com

At one point I added the IP addresses and multiple different DNS names to the cert as well, hoping that a SAN would fix it. But I still receive the same error.

Any suggestions would be greatly appreciated.

Does that remote address correspond to one of the nodes in your cluster? If so, then you need to check the logs on that node.
When a client rejects a certificate it doesn't provide a lot of detail, you always need to check the logs on the client-side to get the full picture.
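
If the client-side log still doesn't say much, JSSE handshake debugging usually prints the exact reason a certificate was rejected. A sketch, not setup-specific advice (the file path assumes a package install, and the flag is very verbose, so remove it once you have the answer):

```
# /etc/elasticsearch/jvm.options (path varies by install type)
# Prints the full TLS handshake, including certificate verification failures
-Djavax.net.debug=ssl,handshake
```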

Yes, that address is another node in the cluster. I wonder if the problem is that the node is connecting with an IP rather than DNS. Is there a way that I can force DNS on the nodes when they connect?

What logs on the other node?
Even if I turn the logging of ElasticSearch up to trace, only one server logs events for the entire cluster.....well unless there is a problem with an index.

Alright, I finally found the problem.

discovery.ec2.host_type: "private_ip"

https://www.elastic.co/guide/en/elasticsearch/plugins/current/_settings.html

This is sending the IP address rather than the DNS name of the server. This is what is causing the cert to fail.
Once again, I have attempted to use SAN and add the IP address, but for some reason it doesn't care about SAN.

I have a new problem of figuring out how to fix this properly. But that is an infrastructure (firewalls or DNS) issue that needs to be resolved.

Okay, so I was wrong.....well partially wrong.

One of the issues was that I was pulling the IP from discovery.ec2.host_type. I have since fixed that by removing all of the EC2 discovery settings and going with a straight:

discovery.zen.ping.unicast.hosts: ["logstash-es01.ec2.example.com", "logstash-es02.ec2.example.com", "logstash-es03.ec2.example.com"]

I then added each of those to the hosts file to make sure that resolution happens exactly the way I want it to.

The bad news is, this still does not work.

The good news is, I finally have an error that I was able to pull from one of my nodes:

[2019-05-03T16:08:37,838][WARN ][o.e.x.s.t.n.exampleNetty4ServerTransport] [logstash-es02.ec2.example.com] exception caught on transport layer [NettyTcpChannel{localAddress=0.0.0.0/0.0.0.0:44370, remoteAddress=10.249.27.68/10.249.27.68:9300}], closing connection
io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: General SSLEngine problem
        at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:472) ~[netty-codec-4.1.30.Final.jar:4.1.30.Final]
        
        
        Caused by: javax.net.ssl.SSLHandshakeException: General SSLEngine problem
        at sun.example.ssl.Handshaker.checkThrown(Handshaker.java:1521) ~[?:?]
        at sun.example.ssl.SSLEngineImpl.checkTaskThrown(SSLEngineImpl.java:528) ~[?:?]
        at sun.example.ssl.SSLEngineImpl.readNetRecord(SSLEngineImpl.java:802) ~[?:?]
        at sun.example.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:766) ~[?:?]
        at javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:624) ~[?:1.8.0_191]
        
        Caused by: java.example.cert.CertificateException: No subject alternative names present
        at sun.example.util.HostnameChecker.matchIP(HostnameChecker.java:145) ~[?:?]
        at sun.example.util.HostnameChecker.match(HostnameChecker.java:94) ~[?:?]
        at sun.example.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:455) ~[?:?]
        at sun.example.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:436) ~[?:?]
        at sun.example.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:252) ~[?:?]
        at sun.example.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:136) ~[?:

So for some reason xpack.security.transport.ssl will only use IP addresses, and I do not know why. Is there a reason that it only uses IP addresses, and can I fix it?
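
One thing I still want to try is forcing the node to publish a DNS name instead of the bound IP, so the address the other nodes dial can match the DNS entry in the SAN. A sketch (the hostname is my usual example, and it has to resolve on every node):

```yaml
# elasticsearch.yml: bind everywhere, but advertise the DNS name to peers
network.host: 0.0.0.0
network.publish_host: logstash-es01.ec2.example.com
```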

Okay, so let's try going down the path of adding a Subject Alternative Name to my certs.
I generate my certs as such:

openssl genrsa -out logstash-es02.key 4096
openssl req -new -nodes -key logstash-es02.key -out logstash-es02.csr -config es02.conf
openssl x509 -req -in logstash-es02.csr -CA CACert.pem -CAkey CAKey.pem -CAcreateserial -out logstash-es02.crt -days 90

My conf file is this:

[req]
default_bits = 4096
default_md = sha512
distinguished_name = dn
req_extensions = req_ext
prompt = no

[ dn ]
C=US
ST=NY
L=Gothan
O=Example
OU=Batman Tech Support
emailAddress=devops@example.com
CN=logstash-es02.ec2.example.com

[ req_ext ]
subjectAltName = @alt_names
extendedKeyUsage = serverAuth,clientAuth

[ alt_names ]
DNS.1 = logstash-es02.ec2.example.com
IP.1 = 10.249.39.247

I test my CSR by running:

openssl req -noout -text -in logstash-es02.csr

Please note the section that says:

        Attributes:
        Requested Extensions:
            X509v3 Subject Alternative Name: 
                DNS:logstash-es02.ec2.example.com, IP Address:10.249.39.247
            X509v3 Extended Key Usage: 
                TLS Web Server Authentication, TLS Web Client Authentication

Yet, I still get the same SAN error. I even tried using elasticsearch-certgen --csr to create the key and CSR request, and I still get the same SAN error.
Please note that I go through this process for all of my nodes in the cluster, with the appropriate names and IP addresses.

Does anyone have the xpack.security.transport.ssl working?
Is this what I am supposed to be using for node to node secure communications?

The next thing that I am going to try is to remove the signing request and see if it works with just a straight cert. I'm vaguely hopeful that it won't work, or else I will be in the same boat that I am in with Lumberjack.

Any suggestions would be greatly appreciated.

I assume these exceptions are literal copy-and-paste from your logs?

If so, can you tell me what JDK you're using?
Those classnames are not what you'd typically see on the JDKs we test with & support.

You are correct that it was a copy and paste from my logs. I changed some info to protect the innocent :wink:

[batman][elasticsearch][elasticsearch][aws][/home/example]# java -version
openjdk version "1.8.0_191"
OpenJDK Runtime Environment (build 1.8.0_191-b12)
OpenJDK 64-Bit Server VM (build 25.191-b12, mixed mode)
[batman][elasticsearch][elasticsearch][aws][/home/example]

That is the version that I am running on all of my ELK servers (Filebeat, Logstash, Kibana, Elastic Search).

Thanks. If you do redact information in logs or error messages, please be explicit about it. It's very hard to provide accurate assistance if we have to guess which information we can rely on.

The TLS connection will verify the SAN against the network address that it used for the connection. Whether that is an IP or a DNS name depends on how you configure your nodes to publish their addresses to the cluster.
Typically what you need is to set network.host on each node.

However,

If you are getting that exception, then the presented certificate does not have any SANs in it. It's not that they don't match, it's that they don't exist.

What does

openssl x509 -in logstash-es02.crt -noout -text

give you?

Is there a reason you are choosing to use openssl instead of elasticsearch-certutil for this? certutil is designed to make these things easy.

The only changes that I make are to the actual domain names. But I will be sure to add a redaction note to all of my future posts.

My network.host settings have been:

#network.host: [_eth0_,_local_]
#network.host: 0.0.0.0
#network.host: 127.0.0.1
network.host: logstash-es02.ec2.example.com
#network.host: logstash-es.example.com

Domains changed to protect the innocent.

So currently my network.host matches the DNS name of the server.

In one of my previous posts I put the snippet of my SAN output from that command.

```
 Attributes:
        Requested Extensions:
            X509v3 Subject Alternative Name:
                DNS:logstash-es02.ec2.example.com, IP Address:10.249.39.247
            X509v3 Extended Key Usage:
                TLS Web Server Authentication, TLS Web Client Authentication
```

Domains changed to protect the innocent.

The reason that I am using openssl is that I am not the one who generates the production certs. For my dev and test environments I generate certs using the same tools and commands that we use in production, just with different information.
Also, as mentioned in one of my previous posts, I did try elasticsearch-certgen --csr and had the exact same SAN error.

When I checked the CSR from elasticsearch-certgen --csr, the only difference I noticed was that IP came before DNS in the SAN section. But since I was unable to get those certs to work, I didn't bother switching the order in my openssl config file.

No, you provided the output for your CSR, not your certificate (.crt).
The CRT is the thing that matters; we need to see what's in your actual certificate. The CSR only tells us what you requested, it doesn't guarantee that's what you got back.

Oooo, good call. I completely glossed over checking the crt.

Upon checking the CRT, there is no SAN section. Which means there might be something wrong with my signing cert. Which would also explain why using the elasticsearch-certutil failed as well.

I will go and start troubleshooting my signing process and report back what I find.

Good catch, and thank you.

Hi, I am having the same problem.

What is missing from my configuration?

==== error in log in elasticsearch.log

[2019-05-10T18:25:30,233][WARN ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [F6wWlXU] http client did not trust this server's certificate, closing connection [id: 0xdd8afb0a, L:0.0.0.0/0.0.0.0:9200 ! R:/187.61.254.146:60269]

===== My config is

node.name: node-1

xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12
xpack.security.enabled: true
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.http.ssl.truststore.path: certs/elastic-certificates.p12
xpack.security.http.ssl.client_authentication: optional

Set a custom port for HTTP:

network.host: 0.0.0.0
http.port: 9200
transport.host: localhost
transport.tcp.port: 9300

=== Java console in browser
https://ec2-34-228-215-203.compute-1.amazonaws.com:9200/suppliers-dev/doc/_search:1
OPTIONS https://es.montafesta.com:9200/suppliers-dev/doc/_search net::ERR_CERT_AUTHORITY_INVALID

Yours may be a slightly different issue than mine. My first guess on yours is that you have your network.host set to 0.0.0.0.
You may want to set that to the DNS of the server / cert.

If you do have the same problem that I have, then it is because of an error with your cert. I have not fixed my problem yet, but I know what the problem is. I probably don't have time today to work on this, but I will most likely work on it on Monday.
I will post my findings once I find, fix and test things.

I'm glad we tracked it down. I would have been very worried if this error was coming up with a cert that had a SAN entry.

Hi @Felipe_Pina,
Your problem is actually a different one - can you please start your own thread to discuss your issue?

That was my problem.

I fixed my issue by running the following commands:

openssl genrsa -out logstash-es02.key 4096
openssl req -new -nodes -key logstash-es02.key -out logstash-es02.csr -config es02.conf
openssl x509 -req -in logstash-es02.csr -CA CACert.pem -CAkey CAKey.pem -CAcreateserial -out logstash-es02.crt -days 900 -extfile es02.conf -extensions req_ext

Adding -extfile es02.conf -extensions req_ext to the end of the x509 signing command solved the problem.

And just to keep the full solution in one post, here is the es02.conf for the cert again.

[req]
default_bits = 4096
default_md = sha512
distinguished_name = dn
req_extensions = req_ext
prompt = no

[ dn ]
C=US
ST=NY
L=Gothan
O=Example
OU=Batman Tech Support
emailAddress=devops@example.com
CN=logstash-es02.ec2.example.com

[ req_ext ]
subjectAltName = @alt_names
extendedKeyUsage = serverAuth,clientAuth

[ alt_names ]
DNS.1 = logstash-es02.ec2.example.com
IP.1 = 10.249.39.247

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.