Confused about setting up Kibana with X-Pack Security

I'm doing a trial of X-Pack but am struggling with setting up TLS. I followed the instructions on how to use the X-Pack certutil to create a CA, then use that to create certs for each Elasticsearch node. However, when I get to configuring Kibana and Logstash after this, I am lost.

I found a section in the config files that specifies a PEM file, but the X-Pack certutil does not create a PEM file; it creates a PKCS#12 keystore. What am I supposed to do to get Logstash and Kibana to be able to communicate with Elasticsearch once TLS is turned on?

Thank you

You can use the elasticsearch-certutil to create a PEM-formatted certificate using the --pem option. See the docs here
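As a sketch (file names assume the certutil defaults; your CA file may be named differently):

```shell
# Sketch: generate PEM-format certs signed by an existing CA.
# elastic-stack-ca.p12 is the default name produced by `elasticsearch-certutil ca`.
bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 --pem
# The output is a zip archive containing a .crt and .key per instance,
# which you can unzip and copy to the relevant hosts.
```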

For Kibana and Logstash, you just need to set their configuration to trust the signing CA cert. It's all laid out here
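For Kibana, that means pointing it at the CA certificate in PEM form (path is a placeholder):

```yaml
# kibana.yml -- trust the CA that signed the Elasticsearch http certificates
elasticsearch.ssl.certificateAuthorities: [ "/path/to/ca.crt" ]
```

For Logstash's elasticsearch output, the equivalent is the plugin's `cacert` option pointing at the same PEM CA file.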

A couple of things regarding this:

  1. I currently have Nginx set up in front of Kibana as a reverse proxy and configured TLS with that using LetsEncrypt. Am I correct that I don't need to configure Kibana's server.ssl?

  2. Is Kibana's elasticsearch.ssl.certificateAuthorities the location of the CA I created on the Elasticsearch node which I used to create the certs on each node?

  3. After creating the CA, I assume I copy it to my local computer somewhere and back it up so I can create more certs if I need more nodes?

I currently have Nginx set up in front of Kibana as a reverse proxy and configured TLS with that using LetsEncrypt. Am I correct that I don't need to configure Kibana's server.ssl?

If you are fine with the communications between your nginx proxy and Kibana being unencrypted, then yes. Kibana's server.ssl settings are for http-layer access to Kibana.
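For reference, a sketch of such a TLS-terminating proxy (server name and certificate paths are placeholders following the LetsEncrypt layout):

```nginx
# nginx terminates TLS and forwards plain http to Kibana on localhost.
server {
    listen 443 ssl;
    server_name kibana.example.com;
    ssl_certificate     /etc/letsencrypt/live/kibana.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/kibana.example.com/privkey.pem;
    location / {
        proxy_pass http://127.0.0.1:5601;   # Kibana's default port, unencrypted
        proxy_set_header Host $host;
    }
}
```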

Is Kibana's elasticsearch.ssl.certificateAuthorities the location of the CA I created on the Elasticsearch node which I used to create the certs on each node?

Yes. Kibana communicates with Elasticsearch internally via http. Thus, it needs to be able to trust the certificate that Elasticsearch will present for TLS/SSL on the http layer.

After creating the CA, I assume I copy it to my local computer somewhere and back it up so I can create more certs if I need more nodes?

Yes. If you have configured your nodes to trust the CA-signed certificates for the transport layer as we lay out in the documentation, then you'd need that CA certificate to create and sign new ones for the new nodes you create in the future, so that they can join the cluster.
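As a sketch, assuming you kept the default PKCS#12 CA file and want PEM output (`es4` is a hypothetical name for the new node):

```shell
# Sign a certificate for a future node with the backed-up CA.
bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 --pem --name es4
# Copy the resulting .crt and .key to the new node and point its
# xpack.security.*.ssl.* settings at them, exactly as on the existing nodes.
```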

Thanks! The one thing I'm still a bit unclear about is elasticsearch.ssl.certificateAuthorities. When I originally tried this, certutil created a PKCS#12 keystore that has a password. I was confused about how Kibana would be able to do anything with that if I didn't also give it the password to unlock it. If I tell certutil to create a PEM file instead of a PKCS#12 keystore, does that mean it won't have a password, so Kibana can access it without needing one?

elasticsearch.ssl.certificateAuthorities

This setting does not support PKCS#12 or other types of truststores. It only accepts paths to PEM files containing certificates or CA certificates that are to be trusted. Certificates are meant to be public, and their PEM-encoded representations are usually not password protected. certutil won't add a password to the certificate PEM file, so you won't need to set a password in your Kibana configuration.
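To illustrate (using openssl rather than certutil here, since any PEM certificate behaves the same way): a PEM certificate is plain, unencrypted text that can be read without any password.

```shell
# Generate a throwaway self-signed CA certificate in PEM format, for illustration only.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
    -keyout demo-ca.key -out demo-ca.crt -days 1
# No password is needed to read or parse the certificate:
openssl x509 -in demo-ca.crt -noout -subject
head -1 demo-ca.crt   # -----BEGIN CERTIFICATE-----
```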

Gotcha, so should the "Enable TLS for Elasticsearch" documentation be updated to include generating a PEM file using certutil?

Not sure which part of the documentation you refer to. I can understand it is unfortunate that you can't use PKCS#12 keystores as truststores in Kibana while these are the easiest way to set up TLS in Elasticsearch. We try to make this clear in the docs:

Elasticsearch TLS documentation contains examples of configuration using either keystores or PEM files.

Kibana TLS Documentation explicitly calls out that you need to use PEM files for the elasticsearch.ssl.certificateAuthorities setting.

elasticsearch-certutil documentation contains examples for both generating PKCS#12 keystores and PEM files.

There is an ongoing effort to make our TLS configuration documentation easier to follow, and we will take your feedback into consideration. Thanks!

I've configured Elasticsearch to use TLS using certutil to generate certs, then I set up Kibana using the docs page you sent, but it isn't connecting. From Kibana:

kibana[13599]: {"type":"log","@timestamp":"2018-11-29T20:30:03Z","tags":["warning","elasticsearch","admin"],"pid":13599,"message":"Unable to revive connection: https://outdomainname:9200/"}

Hi again,

I shared a few links above, so I'm not really sure which one you refer to. It's always better if you can share your actual configuration.

It would also be very helpful if you could share the logs from Elasticsearch, so we can see why the connection fails.

In Kibana, I am using Nginx as a reverse proxy and configured TLS using LetsEncrypt. Kibana is configured to serve on localhost, along with these custom settings:

elasticsearch.url: "https://es1.ourdomain.com:9200"
elasticsearch.username: "kibana"
elasticsearch.password: "notpostingthishere"
elasticsearch.ssl.certificateAuthorities: [ "/etc/kibana/ca/ca.crt" ]
elasticsearch.ssl.verificationMode: certificate
xpack.security.encryptionKey: "removingthistoo"

Each Elasticsearch node is configured with:

cluster.name: production
node.name: es1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 0.0.0.0
discovery.zen.hosts_provider: ec2
discovery.zen.minimum_master_nodes: 2

xpack.security.enabled: true

xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.key: /etc/elasticsearch/instance/instance.key
xpack.security.transport.ssl.certificate: /etc/elasticsearch/instance/instance.crt
xpack.security.transport.ssl.certificate_authorities: ["/etc/elasticsearch/ca/ca.crt"]

xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.key: /etc/elasticsearch/instance/instance.key
xpack.security.http.ssl.certificate: /etc/elasticsearch/instance/instance.crt
xpack.security.http.ssl.certificate_authorities: ["/etc/elasticsearch/ca/ca.crt"]

I used certutil to create the CA on one Elasticsearch server and generated that node's certs. I then SCPed the CA crt and key to my local computer, SCPed them to the other two Elasticsearch nodes, and used certutil on each of those to generate certs for the nodes with that CA. Finally, I deleted ca.key from each Elasticsearch node and kept it only on my local computer.

Is there something wrong with my config? Thank you.

Actually... it's working now. Somehow. Not sure what caused it to not work before. It would still be nice to confirm I configured it correctly though.

It would still be nice to confirm I configured it correctly though.

I don't see anything wrong in the config you shared.

Actually... it's working now. Somehow. Not sure what caused it to not work before.

The message says that Kibana could not communicate with the Elasticsearch node you pointed it to, on port 9200, using HTTP over TLS. There can be a number of reasons: the Elasticsearch node might have been down, the configured URL might have been wrong, or the TLS configuration might have been wrong. There were almost certainly more indications, both in the kibana.log before or after the line you posted above and in the elasticsearch.log of the node that Kibana points to.
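One quick way to narrow such failures down is to reproduce Kibana's connection by hand from the Kibana host (the hostname, CA path, and username below are taken from the configuration shared earlier in this thread):

```shell
# If this succeeds, the network path and TLS trust are fine and the
# problem is in Kibana; if it fails, curl's error usually says why.
curl --cacert /etc/kibana/ca/ca.crt -u kibana \
    https://es1.ourdomain.com:9200/
```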

Thanks! Two more questions:

  1. I was under the impression that private keys should remain on the computers they were created on and never be transferred off them. I'm new to CAs, so is this not true for them? Is it better if I create the CA on my local machine and then transfer each node's public and private keys to its server, or would it be better to transfer the CA to the node, create the node certs there, and then delete the CA private key?

  2. Kibana has config settings for an Elasticsearch SSL cert and key. This sounds like I should copy the exact same keypair from the master ES node and keep it on Kibana as well? Isn't that bad practice?

Hi there,

I was under the impression that private keys should remain on the computers they were created on and never be transferred off them.

In general this is a very good practice. Whether it is the best or optimal practice for you, your organization, and your use case depends on many factors, and it is not easy for anyone on these forums to make that decision for you. You need to factor in your organization's security policy and your system's threat model, or in the simplest form decide for yourself what trade-offs between usability and security you are willing to make.

I assume you are talking about elasticsearch.ssl.certificate and elasticsearch.ssl.key. No, these are not meant to be the same keypair that you use on any of the Elasticsearch nodes.
These two can be set if you want Kibana to perform TLS client authentication when communicating with Elasticsearch (instead of using elasticsearch.username and elasticsearch.password). To set this up, you need to enable a PKI authentication realm in Elasticsearch and create a key and certificate, where the certificate is trusted by Elasticsearch.
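A sketch of that setup, assuming a 6.x-style realm configuration (the realm name `pki1` and the file paths are placeholders):

```yaml
# elasticsearch.yml -- enable a PKI realm so clients can authenticate
# with certificates instead of username/password
xpack.security.authc.realms.pki1.type: pki
xpack.security.authc.realms.pki1.order: 1

# kibana.yml -- present a client certificate that Elasticsearch trusts
elasticsearch.ssl.certificate: /etc/kibana/kibana-client.crt
elasticsearch.ssl.key: /etc/kibana/kibana-client.key
```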

Great, thank you for your help!

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.