Elasticsearch Certificate Requirements

I'm trying to figure out what types of certificates Elasticsearch will work with. I'd like to minimize the number of certs, and the process for maintaining those certs, for the Elastic Stack. I see that PEM and PKCS#12 are mentioned in the documentation, but is that all that's supported? Is there support for PKCS#8?

TL;DR: Yes, Elasticsearch supports both encrypted and unencrypted PKCS#8 encoded keys (using the PEM file format).

Longer Explanation:

Certificate format terminology is complicated and can get confusing.

I'll try to be as clear as I can below, but please ask if something doesn't make sense. Or you can skip it, because it's more detail than you probably need, but these topics frequently confuse people so I figured it was worth writing a proper answer...

Strictly speaking, you're not asking about "types of certificates"; you're asking about file formats for certificates and keys.

The "certificate type" would actually be "X.509", which is largely redundant because 99.99+% of TLS traffic uses X.509 certificates and most people have never encountered any other type of certificate.

Within X.509, people sometimes talk about the cryptographic algorithms as the "certificate type", so you have "RSA", "DSA", and "ECDSA" key types, and hashes such as "SHA-1" and "SHA-256" (the latter being a member of the "SHA-2" family). A few years ago there was a push from browser vendors to stop trusting "SHA-1 certificates", by which they meant "X.509 certificates that use the SHA-1 hash as part of the signature algorithm".

I don't say any of that to be pedantic, but because it's helpful to be clear on the terminology while searching for answers, so you know what to search for and whether information is relevant or not.

We recommend RSA + SHA-256 certificates, but will support whatever algorithms are supported by your JVM.

PEM is a file format for representing a DER encoded cryptographic object in plain ASCII.
That probably doesn't mean very much, but the short of it is that a PEM file can contain a certificate, but it can also contain a private key, and on occasion other object types as well.
And when there are multiple different DER encodings for a particular object type (and this is true for private keys), it is possible to format each of them as a PEM file. Consequently, if a piece of software claims to support "PEM", that gives you no guarantees about which object types & encodings it actually supports.

What this means is: if you have a file that starts with -----BEGIN <object type>----- and ends with -----END <object type>-----, then it's a PEM file, and Elasticsearch can probably handle it. But it does depend slightly on what that <object type> is, and what algorithms are used within the object.
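You can see all of this for yourself with openssl. The file names below are made up for the demo; the PEM headers and the signature algorithm are the parts worth looking at:

```shell
# Generate a throwaway RSA key and a self-signed certificate, both as PEM.
openssl req -x509 -newkey rsa:2048 -sha256 -nodes \
  -keyout demo-key.pem -out demo-cert.pem \
  -days 1 -subj "/CN=pem-demo"

# Each PEM file is plain ASCII; the header names the object type.
head -1 demo-cert.pem    # -----BEGIN CERTIFICATE-----
head -1 demo-key.pem     # PKCS#8 ("BEGIN PRIVATE KEY") on current openssl;
                         # older tooling may emit "BEGIN RSA PRIVATE KEY"

# The signature algorithm (RSA + SHA-256 here) is visible in the decoded cert.
openssl x509 -in demo-cert.pem -noout -text | grep 'Signature Algorithm' | head -1
```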

PKCS#8 is a standard for encoding private key information using DER. (Technically the standard is for an encoding using ASN.1, which can then be serialised using DER, but now we're getting into very low level semantics).

That means PKCS8 isn't technically a certificate type because it

  1. is about keys not certificates
  2. is an encoding format, not a "type" (different key types such as RSA, DSA, and ECDSA can all be encoded into PKCS8).

It also means that technically, PKCS#8 isn't even a file format, because the PKCS#8 standard just tells you how to turn a key into an ASN.1 structure, you still need a way to save that ASN.1 into a file format. And the most common format for that is ... PEM.

Elasticsearch can read PEM formatted, PKCS#8 encoded private keys (with or without encryption). However, the PKCS#8 format is quite flexible, so it is possible to create keys that Elasticsearch can read, but cannot actually use (e.g. because they use an encryption algorithm for which we do not have an implementation).
It is highly unlikely that you will have a PKCS#8 key that Elasticsearch cannot use, but it is possible.
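If you want to produce both flavours yourself, the openssl sketch below does it (file names and the "changeme" password are just examples). genpkey writes PKCS#8 PEM regardless of openssl version, and the pkcs8 subcommand re-encodes it with encryption:

```shell
# Generate an RSA key; genpkey always writes PKCS#8 PEM.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out key-pkcs8.pem
head -1 key-pkcs8.pem        # -----BEGIN PRIVATE KEY-----

# Re-encode the same key as *encrypted* PKCS#8 (PBES2 with AES-256).
openssl pkcs8 -topk8 -in key-pkcs8.pem -v2 aes-256-cbc \
  -passout pass:changeme -out key-pkcs8-enc.pem
head -1 key-pkcs8-enc.pem    # -----BEGIN ENCRYPTED PRIVATE KEY-----
```

Elasticsearch can consume either file via the `xpack.security.http.ssl.key` setting; for the encrypted variant you'd add the passphrase to the Elasticsearch keystore (as `xpack.security.http.ssl.secure_key_passphrase`).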


Fantastic information. I'm familiar with most of the information provided, but I've never really put any thought into the data encoding.

The reason I ask is that I am trying to write up my own documentation for securing the Elastic Stack. I know there is a blog post that was put out, but it's not to my liking. According to what I've read so far, I have to create a cert, stick a copy in each application's config directory, then convert it to PKCS#8 and ship that out to each Beats agent. I'm trying to reduce this to as few steps and files as possible. If all the apps in the Elastic Stack can work with PKCS#8, is there a reason not to do that?

There's so many variables in play that it's really hard to provide a recommendation that is right for everyone.

No, but PKCS#8 is a key standard, so the Beats agents only need that if you want to use mutually authenticated TLS. If you're not trying to do mTLS/PKI then you definitely don't want to be passing private keys out to your Beats agents - they don't need one, and you're probably giving them something that they shouldn't have access to.
If you do want mTLS, then you can absolutely use PKCS#8 for your keys, but whether you should depends on so many factors that I can't give you a useful answer without a much better understanding of your needs.

Longer Explanation

Our tutorials & basic tooling are focused on enabling TLS on the transport port, which is used for communication between ES nodes. This is the only place where we mandate TLS, and the documentation aims to make that step as easy as possible.
We default to using PKCS#12 because it means that there is a single file that contains everything you need (either 1 file for each node if you enable hostname verification, or 1 for the whole cluster if you turn that verification off).
For most people that's the simplest way to get TLS working within their cluster.
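To illustrate what "a single file that contains everything" means: elasticsearch-certutil builds the PKCS#12 bundle for you, but you can see the equivalent done by hand with openssl below. All names and the "changeme" password are made up for the demo:

```shell
# Make a throwaway "CA", then a node key + certificate signed by it.
openssl req -x509 -newkey rsa:2048 -sha256 -nodes \
  -keyout ca-key.pem -out ca-cert.pem -days 1 -subj "/CN=demo-ca"
openssl req -newkey rsa:2048 -nodes -keyout node-key.pem \
  -out node.csr -subj "/CN=node-1"
openssl x509 -req -in node.csr -CA ca-cert.pem -CAkey ca-key.pem \
  -CAcreateserial -sha256 -days 1 -out node-cert.pem

# Bundle key + node cert + CA cert into one password-protected .p12 file.
openssl pkcs12 -export -inkey node-key.pem -in node-cert.pem \
  -certfile ca-cert.pem -name node-1 -passout pass:changeme -out node-1.p12

# That one file now holds both certificates and the private key.
openssl pkcs12 -in node-1.p12 -passin pass:changeme -nokeys \
  | grep -c 'BEGIN CERTIFICATE'   # 2: the node cert plus the CA cert
```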

If you want to enable TLS (but not mutual TLS / PKI) on the REST (http) interface (which is a very good idea), then there are a few more variables to consider.

The first is which CA will sign the certificates. For the transport port that's not a big issue, because the only things that consume those certificates are your other nodes (*), so you have complete control over the trust settings.
But with HTTP, you might have all sorts of clients, and they all need to connect to your cluster. That can include other stack components (like Beats), but can also include official ES language clients (like NEST), or even simple HTTP utilities (like curl). Those clients will have a range of options for how to configure a trusted CA, and they might not even be compatible with one another. (e.g. OpenSSL based environments like Node.js will need PEM. Java can read PEM CAs, but it's not obvious how to do it, and it's easier to work with PKCS#12).

(* Or transport client, but you should use the High Level Rest Client instead)

The best option (in general - no single recommendation is best for everyone) is to have your HTTP certificate generated by a CA that is already trusted by most/all clients. That might be a public CA (like Let's Encrypt) or a corporate CA that is already trusted within your organisation.
If your JVMs and AMIs and Docker containers (etc) are all already configured to trust a specific corporate CA and you can get that CA to sign a certificate for your Elasticsearch REST interface, that covers 90% of the work. I would do that.
Then you are unlikely to need to configure anything except Elasticsearch, and for that you just take whatever cert format your CA provided and use it.

You only need to configure clients (like beats) if you want TLS on the HTTP interface, but don't have an existing, trusted CA that will sign your cert for you.
In that case, I would generate a new CA in PEM format (using elasticsearch-certutil ca --pem) and then use that CA to generate certificates for Elasticsearch's HTTP interface in PKCS#12 (using elasticsearch-certutil cert). The clients need the CA (and most of them will read PEM) and the Elasticsearch configuration is easy because it's still just 1 file (the .p12) to configure.
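Concretely, the end state described above might look something like the sketch below. The setting names are the standard Elasticsearch and Beats TLS settings, but all paths, hostnames, and file names are made up for illustration:

```yaml
# elasticsearch.yml -- serve HTTPS from the single .p12 produced by
# "elasticsearch-certutil cert"; its password goes into the Elasticsearch
# keystore as xpack.security.http.ssl.keystore.secure_password.
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: certs/http.p12

# filebeat.yml -- clients only need the CA certificate (in PEM) to trust
# the cluster; no client key is required unless you want mTLS.
output.elasticsearch:
  hosts: ["https://es01.example.com:9200"]
  ssl.certificate_authorities: ["/etc/filebeat/certs/ca.crt"]
```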

The question of rolling out PKCS#8 keys to each client (Beats agent) only comes up if you want to enable mutual authentication for TLS. Clients don't need a key in order to trust your cluster's HTTP certificate; they only need one if they want to present their own certificate.
That's getting into a very specific security configuration. I can help you with it, but in order to offer helpful advice I'd need to know what you're trying to achieve, because it gets into conversations around whether you want separate certificates per agent, whether the certificate is the authenticated identity of the agent (PKI auth) or whether you want to use certificate and passwords, etc.


I guess I missed the part where mutual TLS was optional. So I just need certs for Elasticsearch and Kibana. Do I HAVE to configure certs for Logstash as well? I'm getting the process figured out and tested in a test environment but when I hit the prod environment it will be secured with a public CA cert.

It Depends.

You don't need certs for logstash to connect to ES.
If you have Beats connecting to Logstash and want to do that over TLS, then you need certs on the Logstash side, and you might want to do mutual TLS for that, because it is the only supported authentication scheme for Beats -> Logstash.
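For reference, a mutually authenticated Beats input on the Logstash side might look like the sketch below. The paths are made up; note that Logstash's beats input wants the private key in PKCS#8 format, which is where the conversion step from earlier in the thread comes in:

```
input {
  beats {
    port => 5044
    ssl  => true
    # the beats input requires the private key to be PKCS#8 encoded
    ssl_certificate => "/etc/logstash/certs/logstash.crt"
    ssl_key         => "/etc/logstash/certs/logstash.pkcs8.key"
    # force_peer makes the TLS mutual: every beat must present a
    # certificate signed by one of these CAs
    ssl_certificate_authorities => ["/etc/logstash/certs/ca.crt"]
    ssl_verify_mode => "force_peer"
  }
}
```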

Ya, I guess I shoulda been more specific about the communication direction. It sounds like the documentation is geared more towards configuring end-to-end mTLS throughout the entire stack, but the only requirements, and the ones that matter for my particular use case, are securing Elasticsearch, where the data sits, and Kibana for client-to-server communication.
