Secure connection on HTTP layer

I currently have a 3-node ES cluster running version 6.2.3, with transport security already configured successfully.
Now I'm planning to secure the HTTP client connections to my cluster by following the steps described in:
https://www.elastic.co/guide/en/elasticsearch/reference/6.2/configuring-tls.html#tls-http

All the SSL certificates have been created using /bin/x-pack/certutil (both CA and cert are in P12 format).
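For reference, the certificates were created roughly along these lines (these are the certutil defaults, so the exact file names and paths may differ slightly from what I actually ran):

/bin/x-pack/certutil ca
/bin/x-pack/certutil cert --ca elastic-stack-ca.p12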
The current configuration looks something like:

xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12

xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.client_authentication: optional
xpack.security.http.ssl.keystore.path: elastic-certificates.p12
xpack.security.http.ssl.truststore.path: elastic-certificates.p12

However, the ES nodes are logging various errors related to SSL certificates:

[2019-08-05T12:51:23,484][INFO ][o.e.n.Node               ] [E5vVSdB] started
.[2019-08-05T12:51:24,396][WARN ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [E5vVSdB] caught exception while handling client http traffic, closing connection [id: 0x8d924f53, L:0.0.0.0/0.0.0.0:9200 ! R:/0:0:0:0:0:0:0:1:42514]
io.netty.handler.codec.DecoderException: javax.net.ssl.SSLException: Received fatal alert: unknown_ca

Caused by: javax.net.ssl.SSLException: Received fatal alert: unknown_ca
	at sun.security.ssl.Alerts.getSSLException(Alerts.java:208) ~[?:?]
	at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1647) ~[?:?]

After a bulk of similar errors had been logged, the message changed to:

[2019-08-05T12:56:53,053][WARN ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [E5vVSdB] caught exception while handling client http traffic, closing connection [id: 0x31ed32ed, L:0.0.0.0/0.0.0.0:9200 ! R:/10.131.12.1:44292]
io.netty.handler.codec.DecoderException: io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 474554202f5f636c75737465722f6865616c74683f7072657474793d7472756520485454502f312e310d0a4163636570743a202a2f2a0d0a417574686f72697a6174696f6e3a204261736963205a57786863335270597a706a614746755a3256745a513d3d0d0a557365722d4167656e743a206375726c2f372e32392e300d0a582d466f727761726465642d466f723a2031302e39382e36302e3234380d0a486f73743a206c6f6767696e672d65732d6c6f6767696e672e6374742e656e762e7061706572626f790d0a582d466f727761726465642d486f73743a206c6f6767696e672d65732d6c6f6767696e672e6374742e656e762e7061706572626f790d0a582d466f727761726465642d506f72743a2038300d0a582d466f727761726465642d50726f746f3a20687474700d0a466f727761726465643a20666f723d31302e39382e342e3131363b686f73743d6c6f6767696e672d65732d6c6f6767696e672e6374742e656e762e7061706572626f793b70726f746f3d687474700d0a582d466f727761726465642d466f723a2031302e39382e342e3131360d0a0d0a

Caused by: io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 474554202f5f636c75737465722f6865616c74683f7072657474793d7472756520485454502f312e310d0a4163636570743a202a2f2a0d0a417574686f72697a6174696f6e3a204261736963205a57786863335270597a706a614746755a3256745a513d3d0d0a557365722d4167656e743a206375726c2f372e32392e300d0a582d466f727761726465642d466f723a2031302e39382e36302e3234380d0a486f73743a206c6f6767696e672d65732d6c6f6767696e672e6374742e656e762e7061706572626f790d0a582d466f727761726465642d486f73743a206c6f6767696e672d65732d6c6f6767696e672e6374742e656e762e7061706572626f790d0a582d466f727761726465642d506f72743a2038300d0a582d466f727761726465642d50726f746f3a20687474700d0a466f727761726465643a20666f723d31302e39382e342e3131363b686f73743d6c6f6767696e672d65732d6c6f6767696e672e6374742e656e762e7061706572626f793b70726f746f3d687474700d0a582d466f727761726465642d466f723a2031302e39382e342e3131360d0a0d0a
	at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1106) ~[?:?]
	at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1162) ~[?:?]

Any idea what could be wrong here, and how can I make sure my connections to ES are secured?

[2019-08-05T12:51:23,484][INFO ][o.e.n.Node               ] [E5vVSdB] started
.[2019-08-05T12:51:24,396][WARN ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [E5vVSdB] caught exception while handling client http traffic, closing connection [id: 0x8d924f53, L:0.0.0.0/0.0.0.0:9200 ! R:/0:0:0:0:0:0:0:1:42514]
io.netty.handler.codec.DecoderException: javax.net.ssl.SSLException: Received fatal alert: unknown_ca

Caused by: javax.net.ssl.SSLException: Received fatal alert: unknown_ca
	at sun.security.ssl.Alerts.getSSLException(Alerts.java:208) ~[?:?]
	at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1647) ~[?:?]

This says that a connection from a client to Elasticsearch on the HTTP layer was closed because the client terminated the TLS handshake with a fatal alert:

Received fatal alert: unknown_ca

Have you configured all your clients to trust the CA certificate that you generated and used to sign the certificate the Elasticsearch HTTP layer uses? If not, they can't trust that certificate and the connections will fail (which is what happens here). Kibana, Logstash, Beats and other clients communicate with Elasticsearch over the HTTP layer, so when you enable TLS you need to configure all of them to connect to Elasticsearch correctly and securely.
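For a quick check with curl, for example, you could extract the CA certificate from the PKCS#12 bundle and pass it explicitly instead of disabling verification (a sketch, assuming the default CA file name elastic-stack-ca.p12, the built-in elastic user, and a node reachable on localhost; adjust to your setup):

# export the CA certificate in PEM form from the CA bundle generated by certutil
openssl pkcs12 -in elastic-stack-ca.p12 -nokeys -out elastic-ca.crt

# verify the node certificate against that CA instead of skipping verification with -k
curl --cacert elastic-ca.crt -u elastic https://localhost:9200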

[2019-08-05T12:56:53,053][WARN ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [E5vVSdB] caught exception while handling client http traffic, closing connection [id: 0x31ed32ed, L:0.0.0.0/0.0.0.0:9200 ! R:/10.131.12.1:44292]
io.netty.handler.codec.DecoderException: io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 474554202f5f636c75737465722f6865616c74683f7072657474793d7472756520485454502f312e310d0a4163636570743a202a2f2a0d0a417574686f72697a6174696f6e3a204261736963205a57786863335270597a706a614746755a3256745a513d3d0d0a557365722d4167656e743a206375726c2f372e32392e300d0a582d466f727761726465642d466f723a2031302e39382e36302e3234380d0a486f73743a206c6f6767696e672d65732d6c6f6767696e672e6374742e656e762e7061706572626f790d0a582d466f727761726465642d486f73743a206c6f6767696e672d65732d6c6f6767696e672e6374742e656e762e7061706572626f790d0a582d466f727761726465642d506f72743a2038300d0a582d466f727761726465642d50726f746f3a20687474700d0a466f727761726465643a20666f723d31302e39382e342e3131363b686f73743d6c6f6767696e672d65732d6c6f6767696e672e6374742e656e762e7061706572626f793b70726f746f3d687474700d0a582d466f727761726465642d466f723a2031302e39382e342e3131360d0a0d0a

Caused by: io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 474554202f5f636c75737465722f6865616c74683f7072657474793d7472756520485454502f312e310d0a4163636570743a202a2f2a0d0a417574686f72697a6174696f6e3a204261736963205a57786863335270597a706a614746755a3256745a513d3d0d0a557365722d4167656e743a206375726c2f372e32392e300d0a582d466f727761726465642d466f723a2031302e39382e36302e3234380d0a486f73743a206c6f6767696e672d65732d6c6f6767696e672e6374742e656e762e7061706572626f790d0a582d466f727761726465642d486f73743a206c6f6767696e672d65732d6c6f6767696e672e6374742e656e762e7061706572626f790d0a582d466f727761726465642d506f72743a2038300d0a582d466f727761726465642d50726f746f3a20687474700d0a466f727761726465643a20666f723d31302e39382e342e3131363b686f73743d6c6f6767696e672d65732d6c6f6767696e672e6374742e656e762e7061706572626f793b70726f746f3d687474700d0a582d466f727761726465642d466f723a2031302e39382e342e3131360d0a0d0a
	at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1106) ~[?:?]
	at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1162) ~[?:?]

As the error message says, this is a client attempting to connect to Elasticsearch on the HTTP layer over plain HTTP, while Elasticsearch now expects HTTP over TLS (HTTPS), and thus throws an error. In fact, if you hex-decode the string you see there, it's a request to

GET /_cluster/health?pretty=true HTTP/1.1

made with curl
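If you want to check that yourself, you can decode such a hex dump with something like this (assuming xxd is available; paste the full hex string from the log in place of the placeholder):

echo '<hex string from the log>' | xxd -r -p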

Thank you @ikakavas.
For now, there are no clients running in my environment (Kibana and fluentd have been scaled down).
Those errors were being logged as soon as the node started up, without a single request being sent, so they are probably caused by the system monitoring watches.
Is there any specific setting that needs to be configured for watches? Shouldn't they have been updated implicitly once I configured TLS over HTTP?

No.

TLS settings for monitoring are described here: Monitoring settings in Elasticsearch | Elasticsearch Guide [7.3] | Elastic

TLS settings for watcher are described here: Watcher settings in Elasticsearch | Elasticsearch Guide [7.3] | Elastic

Thank you @ikakavas for pointing out those docs.
Any idea what $NAME refers to here:

xpack.monitoring.exporters.$NAME.ssl.keystore.path

It's the name of the exporter. See our docs on exporters. You only need to care about this if you are using an http exporter that connects to a cluster for which you have enabled TLS.
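For illustration only (the exporter name my_remote and the host are made up), such an http exporter with TLS settings would look roughly like this in elasticsearch.yml:

xpack.monitoring.exporters.my_remote:
  type: http
  host: ["https://monitoring-host:9200"]
  ssl:
    truststore.path: elastic-certificates.p12
    verification_mode: certificate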

Ahh, got it. We are not using any exporters, so I just made the changes below as per https://www.elastic.co/guide/en/elasticsearch/reference/7.3/notification-settings.html#ssl-notification-settings for PKCS#12 files:

xpack.http.ssl.keystore.path: elastic-certificates.p12
xpack.http.ssl.truststore.path: elastic-certificates.p12

However, I'm still getting those unknown_ca errors:

[2019-08-05T15:09:48,698][WARN ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [E5vVSdB] caught exception while handling client http traffic, closing connection [id: 0xc148de09, L:0.0.0.0/0.0.0.0:9200 ! R:/0:0:0:0:0:0:0:1:45456]
io.netty.handler.codec.DecoderException: javax.net.ssl.SSLException: Received fatal alert: unknown_ca

You don't need that unless you want Watcher to perform TLS client authentication; this is described in the docs I shared with you:

A private key and certificate are optional and would be used if the server requires client authentication for PKI authentication.

The keystore contains a private key and certificate.
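If in doubt, you can inspect what a given PKCS#12 file actually contains with openssl, for example:

openssl pkcs12 -in elastic-certificates.p12 -info -nokeys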

However, I'm still getting those unknown_ca errors:

It may be that I misread the exception above, and it is actually Elasticsearch complaining that it can't trust the certificate a client is sending as part of mutual TLS authentication, and not the other way around.

Can you see if removing

xpack.http.ssl.keystore.path: elastic-certificates.p12

fixes that?

Otherwise, please share your exact configuration from all your nodes.

BTW, it's something from localhost

L:0.0.0.0/0.0.0.0:9200 ! R:/0:0:0:0:0:0:0:1:45456

on port 45456 that attempts to connect to Elasticsearch, if that helps you figure out what it is or pinpoint it with the help of lsof -i :45456
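If lsof is not available (which is often the case on minimal container images), ss or netstat can usually give you the same information, e.g.:

ss -tnp | grep 45456
netstat -tnp | grep 45456

(45456 is just the ephemeral port from the log above; it will be different for every connection.)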

@ikakavas I need that http.ssl.keystore property since we have a watch which checks the cluster health and triggers an alert if it is not green.
Anyway, I tried removing keystore.path as you mentioned, but it didn't work.

My current configuration looks like:

cluster:
  name: ${CLUSTER_NAME}

node:
  master: true
  data: true

discovery.zen.minimum_master_nodes: ${NODE_QUORUM}

network:
  host: 0.0.0.0

cloud:
  kubernetes:
    service: ${SERVICE_DNS}
    namespace: ${NAMESPACE}

discovery.zen.hosts_provider: kubernetes


path:
  data: /elasticsearch/persistent/${CLUSTER_NAME}/data
  logs: /elasticsearch/${CLUSTER_NAME}/logs

gateway:
  expected_master_nodes: ${NODE_QUORUM}
  recover_after_nodes: ${RECOVER_AFTER_NODES}
  expected_nodes: ${RECOVER_EXPECTED_NODES}
  recover_after_time: ${RECOVER_AFTER_TIME}

indices.breaker.fielddata.limit: 85%
indices.fielddata.cache.size: 9GB

xpack.http.ssl.verification_mode: certificate

xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12

xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.client_authentication: optional
xpack.security.http.ssl.keystore.path: elastic-certificates.p12
xpack.security.http.ssl.truststore.path: elastic-certificates.p12

xpack.http.ssl.truststore.path: elastic-certificates.p12
xpack.ssl.client_authentication: optional
xpack.ssl.truststore.path: elastic-certificates.p12

And I'm still receiving unknown_ca errors :frowning:

About:
on port 45456 that attempts to connect to Elasticsearch

The lsof command is not available on any node nor on my local machine, and the port (45456) keeps changing: a new number appears with every error logged, even when no client is running.

Apologies, I did not see the

xpack.security.http.ssl.client_authentication: optional 

in your settings.

Using elastic-certificates.p12 for Watcher to connect to Elasticsearch, i.e.

xpack.http.ssl.keystore.path: elastic-certificates.p12

is not a valid option. This PKCS#12 file contains a CA certificate plus the server certificate and key that the Elasticsearch nodes (as the server) use for TLS. You can't simply reuse the same certificate and key for the client side (Watcher) in mutual TLS authentication. The client needs to have its own certificate and key if you want to perform TLS client authentication.

You'd need to create a separate PKCS#12 keystore for the watch, with a key and a certificate that is signed by the same CA, i.e. with

bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12

and use the resulting p12 file as the xpack.http.ssl.keystore.path only (xpack.http.ssl.truststore.path should remain as is, since the HTTP client executing the watch needs to be able to trust the Elasticsearch node certificate).
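Roughly, and assuming you name the new file http-client.p12 (the name is just a placeholder), the Watcher-related settings would then look like:

xpack.http.ssl.keystore.path: http-client.p12
xpack.http.ssl.truststore.path: elastic-certificates.p12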

Do you need/want to use TLS client authentication, though? Is the above configured on purpose? You could also just pass the credentials to connect to Elasticsearch in the HTTP input definition.

Could you share your watch input so that we can see what it currently looks like?

I wasn't sure that you were aware of what was trying to connect to Elasticsearch, which is why I suggested that. It looks like we now know that this is a watch hitting the health endpoint, so that's irrelevant now.

Agreed, the cluster health check watch uses:

"http": {
    "request": {
        "host": "localhost",
        "port": 9200,
        "path": "/_cluster/health",
        "auth": {
            "basic": {
                "username": "kibana_watcher",
                "password": "password"
            }
        }
    }
}

and I don't really need http.ssl.keystore.path.
I tried updating my config as below and restarted the cluster, but I'm still hitting the same unknown_ca issue:

xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12

xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: client-certificate.p12
xpack.security.http.ssl.truststore.path: elastic-certificates.p12

#xpack.http.ssl.verification_mode: certificate
#xpack.http.ssl.keystore.path: client-certificate.p12
#xpack.http.ssl.truststore.path: elastic-certificates.p12

The same errors were also appearing when the xpack.http.ssl... properties were enabled with the above values.
The client certificate was created using:

/bin/x-pack/certutil cert --ca /etc/elasticsearch/elastic-stack-ca.p12

since there is no elasticsearch-certutil in the /bin directory (for ES 6.2.3).

Maybe I should try doing something like: https://www.server-world.info/en/note?os=CentOS_7&p=elasticstack6&f=12

@ikakavas I guess the issue was with the route created on OpenShift to access the ES cluster.
I can now curl the ES cluster and get a response (albeit using the -k switch):

$ curl -u ayush.b.mathur https://logging-es-logging.ctt.env.paperboy -k
Enter host password for user 'ayush.b.mathur':
{
  "name" : "vQVCPf1",
  "cluster_name" : "logging-es",
  "cluster_uuid" : "clusteruuid",
  "version" : {
    "number" : "6.2.3",
    "build_hash" : "somehash",
    "build_date" : "2018-03-13T10:06:29.741383Z",
    "build_snapshot" : false,
    "lucene_version" : "7.2.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

However, I can still see unknown_ca errors in the ES pods. The current configuration looks like:

xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12

xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: elastic-clients.p12
xpack.security.http.ssl.truststore.path: elastic-certificates.p12

xpack.http.ssl.verification_mode: certificate
xpack.http.ssl.keystore.path: elastic-clients.p12
xpack.http.ssl.truststore.path: elastic-certificates.p12

Any further ideas on how I can resolve those errors (apparently coming from the system monitoring watches)?

I'm sorry but I don't follow this sentence. Could you elaborate?

You don't need

xpack.http.ssl.keystore.path: elastic-clients.p12

for sure, as no key/certificate should be used for client TLS authentication there. I would expect that removing it will resolve the issue.
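In other words, for the Watcher HTTP client only the truststore should be needed, roughly:

xpack.http.ssl.verification_mode: certificate
xpack.http.ssl.truststore.path: elastic-certificates.p12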

Is this the whole configuration? Could there be a *.client_authentication setting left in your elasticsearch.yml?

Hi @ikakavas, we have created some routes in our OpenShift environment to connect to Kibana/Elasticsearch. Earlier the route was using edge TLS termination and the router was not letting the requests through; after changing it to passthrough TLS termination, it works fine now.
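For reference, the relevant part of such a passthrough route looks roughly like this (a sketch; the route and service names are placeholders, the host is the one from the logs above):

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: logging-es
spec:
  host: logging-es-logging.ctt.env.paperboy
  to:
    kind: Service
    name: logging-es
  tls:
    termination: passthrough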

I tried removing xpack.http.ssl.keystore.path as you mentioned, but still got the same issue on server restart:

[2019-08-06T10:15:13,826][INFO ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [vQVCPf1] publish_address {ES_IP:9200}, bound_addresses {[::]:9200}
[2019-08-06T10:15:13,827][INFO ][o.e.n.Node               ] [vQVCPf1] started
.[2019-08-06T10:15:14,787][WARN ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [vQVCPf1] caught exception while handling client http traffic, closing connection [id: 0x0d4363df, L:0.0.0.0/0.0.0.0:9200 ! R:/0:0:0:0:0:0:0:1:34910]
io.netty.handler.codec.DecoderException: javax.net.ssl.SSLException: Received fatal alert: unknown_ca
