All the SSL certificates have been created using /bin/x-pack/certutil (both CA and cert are in P12 format).
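For reference, a typical pair of commands to produce such a CA and a node certificate, both in PKCS#12 format, looks roughly like the sketch below. The file names are placeholders and the exact binary path differs between versions (the post above uses bin/x-pack/certutil, newer releases ship bin/elasticsearch-certutil):

```
# Sketch only: create a CA, then a node certificate signed by it (both PKCS#12).
# File names are placeholders; the tool prompts for output passwords.
bin/elasticsearch-certutil ca --out elastic-stack-ca.p12
bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 --out es-node.p12
```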
The current configuration looks something like:
[2019-08-05T12:51:23,484][INFO ][o.e.n.Node ] [E5vVSdB] started
[2019-08-05T12:51:24,396][WARN ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [E5vVSdB] caught exception while handling client http traffic, closing connection [id: 0x8d924f53, L:0.0.0.0/0.0.0.0:9200 ! R:/0:0:0:0:0:0:0:1:42514]
io.netty.handler.codec.DecoderException: javax.net.ssl.SSLException: Received fatal alert: unknown_ca
Caused by: javax.net.ssl.SSLException: Received fatal alert: unknown_ca
at sun.security.ssl.Alerts.getSSLException(Alerts.java:208) ~[?:?]
at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1647) ~[?:?]
This says that a connection from a client to Elasticsearch on the http layer was closed because the client threw an error saying:
Received fatal alert: unknown_ca
Have you configured all your clients to trust the CA certificate that you have generated and used for signing the certificate that the Elasticsearch http layer uses? If not, they can't trust the certificate and the connections will fail (which is what happens here). Kibana, Logstash, Beats and other clients communicate with Elasticsearch over the http layer, so when you enable TLS you need to configure all of them to connect to Elasticsearch correctly and securely.
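As a minimal sketch of what that trust configuration looks like from a client's side, assuming the CA was written to elastic-stack-ca.p12 (file names and host below are placeholders):

```
# Sketch only: extract the CA certificate in PEM form from the PKCS#12 produced by certutil,
# then have a client (curl here) verify the node certificate against it.
openssl pkcs12 -in elastic-stack-ca.p12 -nokeys -out ca.crt
curl --cacert ca.crt -u elastic 'https://es-host:9200/_cluster/health?pretty'
```

Kibana, Logstash and Beats have their own equivalent settings for pointing at that CA file (for example elasticsearch.ssl.certificateAuthorities in kibana.yml).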
[2019-08-05T12:56:53,053][WARN ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [E5vVSdB] caught exception while handling client http traffic, closing connection [id: 0x31ed32ed, L:0.0.0.0/0.0.0.0:9200 ! R:/10.131.12.1:44292]
io.netty.handler.codec.DecoderException: io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 474554202f5f636c75737465722f6865616c74683f7072657474793d7472756520485454502f312e310d0a4163636570743a202a2f2a0d0a417574686f72697a6174696f6e3a204261736963205a57786863335270597a706a614746755a3256745a513d3d0d0a557365722d4167656e743a206375726c2f372e32392e300d0a582d466f727761726465642d466f723a2031302e39382e36302e3234380d0a486f73743a206c6f6767696e672d65732d6c6f6767696e672e6374742e656e762e7061706572626f790d0a582d466f727761726465642d486f73743a206c6f6767696e672d65732d6c6f6767696e672e6374742e656e762e7061706572626f790d0a582d466f727761726465642d506f72743a2038300d0a582d466f727761726465642d50726f746f3a20687474700d0a466f727761726465643a20666f723d31302e39382e342e3131363b686f73743d6c6f6767696e672d65732d6c6f6767696e672e6374742e656e762e7061706572626f793b70726f746f3d687474700d0a582d466f727761726465642d466f723a2031302e39382e342e3131360d0a0d0a
Caused by: io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 474554202f5f636c75737465722f6865616c74683f7072657474793d7472756520485454502f312e310d0a4163636570743a202a2f2a0d0a417574686f72697a6174696f6e3a204261736963205a57786863335270597a706a614746755a3256745a513d3d0d0a557365722d4167656e743a206375726c2f372e32392e300d0a582d466f727761726465642d466f723a2031302e39382e36302e3234380d0a486f73743a206c6f6767696e672d65732d6c6f6767696e672e6374742e656e762e7061706572626f790d0a582d466f727761726465642d486f73743a206c6f6767696e672d65732d6c6f6767696e672e6374742e656e762e7061706572626f790d0a582d466f727761726465642d506f72743a2038300d0a582d466f727761726465642d50726f746f3a20687474700d0a466f727761726465643a20666f723d31302e39382e342e3131363b686f73743d6c6f6767696e672d65732d6c6f6767696e672e6374742e656e762e7061706572626f793b70726f746f3d687474700d0a582d466f727761726465642d466f723a2031302e39382e342e3131360d0a0d0a
at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1106) ~[?:?]
at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1162) ~[?:?]
As the error message says, this is a client attempting to connect to Elasticsearch on the http layer with plain http, while Elasticsearch expects connections over http over TLS (https), and thus throws an error. Actually, if you hex-decode the string you see there, it's a plain-http request to GET /_cluster/health?pretty=true.
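If anyone wants to verify that decoding, the start of the hex payload from the log turns back into ASCII with any hex decoder, for example:

```
# Decode the first part of the "not an SSL/TLS record" hex payload back to ASCII.
echo '474554202f5f636c75737465722f6865616c74683f7072657474793d7472756520485454502f312e31' | xxd -r -p; echo
# -> GET /_cluster/health?pretty=true HTTP/1.1   (the rest of the payload is the request headers)
```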
Thank you @ikakavas.
For now, there are no clients running on my environment (Kibana and fluentd have been scaled down).
Those errors were being logged as soon as the node started up, without a single request being sent, probably coming from system monitoring watches.
Is there any specific setting to be performed for watches? Shouldn't it have been updated implicitly once I configured TLS over HTTP?
It's the name of the exporter. See our docs on exporters. You only need to care about this if you are using an http exporter that connects to your cluster for which you enabled TLS.
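For orientation, an http exporter that ships monitoring data to a TLS-enabled cluster is configured along these lines; the exporter id my_remote, the host and the truststore path are placeholders, not settings taken from this thread:

```
# Sketch only: an http monitoring exporter pointed at a TLS-enabled cluster.
# "my_remote", the host and the truststore path are placeholders.
cat >> config/elasticsearch.yml <<'EOF'
xpack.monitoring.exporters.my_remote.type: http
xpack.monitoring.exporters.my_remote.host: ["https://es-host:9200"]
xpack.monitoring.exporters.my_remote.ssl.truststore.path: certs/elastic-stack-ca.p12
EOF
```

Again, this only matters if such an exporter is actually in use; the default local exporter does not go over the http layer.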
You don't need that unless you want watcher to perform client TLS authentication; this is described in the docs I shared with you:
A private key and certificate are optional and would be used if the server requires client authentication for PKI authentication.
The keystore contains a private key and certificate.
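Put differently, for the trust side alone something like the following in elasticsearch.yml would be enough; the truststore path is a placeholder, plus whatever truststore password setting your version needs if the file is password protected:

```
# Sketch only: let watcher's outgoing http client trust the CA, without any client keystore.
# The path is a placeholder for wherever the CA .p12 actually lives.
cat >> config/elasticsearch.yml <<'EOF'
xpack.http.ssl.truststore.path: certs/elastic-stack-ca.p12
EOF
```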
However, I'm still getting those unknown_ca errors:
It may be that I misread the exception above and it is actually Elasticsearch complaining that it can't trust the certificate a client is sending as part of mutual TLS authentication, and not the other way around.
@ikakavas I need that http.ssl.keystore property since we have a watch which checks cluster health and triggers an alert if it is not green.
Anyway, I tried removing keystore.path as you mentioned, but it didn't work.
About "on port 45456 that attempts to connect to Elasticsearch":
The lsof command is not available on any node or on my local machine, and the port number (45456) keeps changing (a new number with every error logged, even when no client is running).
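As an aside, where lsof is missing, the iproute2 ss tool (if present on the host that actually opens the connection) can show the same information; since 45456 is an ephemeral source port that changes per connection, this only helps while a connection is open:

```
# Sketch: list TCP connections towards the Elasticsearch http port with the owning process.
# Only meaningful on the machine that originates the connection, while it is still open.
ss -tanp | grep ':9200'
```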
That is not a valid option. This PKCS12 contains a CA certificate and a server certificate and key that the Elasticsearch nodes (as the server) use for TLS. You can't simply reuse the same certificate and key for the client side (watcher) in mutual TLS authentication. The client needs its own certificate and key if you want to perform client TLS authentication.
You'd need to create a separate PKCS12 keystore for the watch, with a key and a certificate signed by the same CA (see the sketch below), and use that resulting p12 file as the xpack.http.ssl.keystore.path only (xpack.http.ssl.truststore.path should remain as is, since the http client executing the watch needs to be able to trust the Elasticsearch node certificate).
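A rough sketch of such a certutil invocation, with every file and instance name a placeholder:

```
# Sketch only: a separate client certificate/keystore for the watch, signed by the same CA.
# File and instance names are placeholders; the tool prompts for the CA and output passwords.
bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 --name watcher-client --out watcher-client.p12
```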
Do you need/want to use TLS client authentication though? Is the above configured on purpose? You could also pass the credentials to connect to Elasticsearch in the HTTP input definition.
Could you share your watch input so that we can see how it currently looks?
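To illustrate the point about credentials in the input, a trimmed cluster-health watch could look roughly like this. The watch id, user, password and host are placeholders, condition and actions are omitted, and on 7.x the endpoint is _watcher/watch/<id> rather than _xpack/watcher/watch/<id>:

```
# Sketch only: an http input that calls the cluster health API over https with basic auth.
# All names, credentials and the host are placeholders; condition/actions left out.
curl --cacert ca.crt -u elastic -X PUT 'https://es-host:9200/_xpack/watcher/watch/cluster_health_check' \
  -H 'Content-Type: application/json' -d '
{
  "trigger": { "schedule": { "interval": "1m" } },
  "input": {
    "http": {
      "request": {
        "scheme": "https",
        "host": "es-host",
        "port": 9200,
        "path": "/_cluster/health",
        "auth": { "basic": { "username": "health_watcher", "password": "<password>" } }
      }
    }
  }
}'
```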
I wasn't sure that you were aware of what was trying to connect to Elasticsearch, which is why I suggested that. It looks like we now know that this is a watch hitting the health endpoint, so that's irrelevant now.
and I don't really need http.ssl.keystore.path.
I tried updating my config as below and restarted the cluster, but I'm still hitting the same unknown_ca issue:
@ikakavas I guess the issue was with the route created on OpenShift to access the ES cluster.
I can now curl to the ES cluster and get a response (although using the -k switch):
Hi @ikakavas, we have created some routes on our OpenShift environment to connect with Kibana/Elasticsearch. Earlier the route was using Edge TLS termination and the router was not letting the requests through; after changing it to TLS: Passthrough, it works fine now.
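On the -k workaround mentioned a couple of posts up: with the route doing TLS passthrough, the same request can verify the node certificate instead of skipping verification, roughly like so (the CA file and route hostname are placeholders, and the node certificate needs the route hostname among its SANs for hostname verification to pass):

```
# Instead of -k (which disables certificate verification), verify against the signing CA.
# ca.crt and the route hostname are placeholders.
curl --cacert ca.crt -u elastic 'https://logging-es.example.route/_cluster/health?pretty'
```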
I tried removing xpack.http.ssl.keystore.path as you mentioned, but still got the same issue on server restart: