Setting up SSL via a load balancer on a different server

I have a classic setup with an Nginx load balancer that distributes traffic to all my other non-public servers. Each server runs only one service, e.g. Postgres or Elasticsearch.

I just discovered Elasticsearch and am having trouble understanding why recent versions come with an automatic SSL configuration. Why not just let the load balancer take care of that?

Here's my config:

upstream elastic {
        #es01
        server     10.100.6.10:9200;
        #es02
        server     10.100.6.11:9200;
        keepalive  15;
}

server {
        listen          80;
        listen          [::]:80;
        server_name     elastic.mydom.com;
        return 301      https://$host$request_uri;
}

server {
        listen          443 ssl http2;
        listen          [::]:443 ssl http2;
        server_name     elastic.mydom.com;

        # LOGS
        access_log      /var/log/nginx/elastic.access.log;
        error_log       /var/log/nginx/elastic.error.log;

        # GZIP
        gzip on;

        # SSL
        ssl_certificate         /path-to-cert-on-my-nginx-server/fullchain.pem;
        ssl_certificate_key     /path-to-key-on-my-nginx-server/privkey.pem;
        ssl_protocols           TLSv1.2 TLSv1.3;
    
        location / {
                proxy_pass      http://elastic;
                proxy_redirect  off;
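                # Note: a keepalive upstream requires HTTP/1.1 with the
                # Connection header cleared; if these are not already among
                # the elided directives below, they are worth adding:
                proxy_http_version 1.1;
                proxy_set_header   Connection "";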
                # and so on......
        }
        
        # and so on......
}

This is very convenient and makes adding a new server (or a new ES node?) a breeze, but everything I've read about Elasticsearch's SSL configuration has confused me. So... is this enough?

If yes, can I just delete the entire "BEGIN SECURITY AUTO CONFIGURATION" part in elasticsearch.yml, or is it still important for configuring the multi-node cluster?

If no, what's the next step?

Because most people don't use a load balancer with Elasticsearch. You can, but it's not the typical pattern. The automatic SSL configuration exists so that every installation of Elasticsearch is secure by default - you cannot have a secure cluster if it is accessed over cleartext HTTP connections. We do not assume that you have any infrastructure other than what Elasticsearch provides automatically.

Elasticsearch client libraries have built-in behaviour to handle new nodes. When they connect to a node they discover ("sniff") all the nodes in the cluster, and can be configured to fail over to another node if the current one becomes unavailable. This gives them node affinity (which can help with caching) as well as resilience.
A load balancer can also be effective, but it is not strictly needed if you use smart clients.
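For illustration, here's a minimal sketch using the official Node.js client (@elastic/elasticsearch); the node addresses are taken from the upstream block above and the API key is a placeholder:

import { Client } from '@elastic/elasticsearch'

const client = new Client({
    // Same nodes as the nginx upstream above
    nodes: ['https://10.100.6.10:9200', 'https://10.100.6.11:9200'],
    auth: { apiKey: 'placeholder-api-key' },
    sniffOnStart: true,            // discover all cluster nodes at startup
    sniffOnConnectionFault: true,  // re-sniff and fail over if a node drops
    // (trusting the self-signed HTTP CA is covered later in this thread)
})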

can I just delete the entire "BEGIN SECURITY AUTO CONFIGURATION" part in elasticsearch.yml

You can, but you'll also lose SSL between nodes, and that is important if you want to have a multi-node cluster.
Really, it just sounds like you want to change xpack.security.http.ssl.enabled to false.
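For reference, the auto-generated security section looks roughly like the sketch below; flipping just the http flag keeps TLS between nodes intact (paths are the installer defaults):

xpack.security.enabled: true

# HTTP layer - only safe to disable when clients reach ES over a trusted
# network or through a TLS-terminating proxy:
xpack.security.http.ssl:
  enabled: false
  keystore.path: certs/http.p12

# Transport layer - keep enabled for a multi-node cluster:
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12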

All right, I think I got it.

With the help of your comment and many, many tutorials, I think I understand how authorization of requests to Elasticsearch works. Please correct me if I'm wrong!

In fact, I don't need to expose the Elasticsearch server on a public IPv4 at all. It should only be reachable via its internal IP, i.e. only by the other servers in the same environment. That is where all communication between the ES server and third-party clients happens - never over the public internet (unless using non-self-hosted services).

One of the clients would be Kibana, which should be hosted on a different server in the same environment. For the requests from Kibana to ES to be authorized, Kibana should provide in its requests the "http_ca.crt" that was automatically generated when we installed ES (this is where I was totally lost!). To do that:

  1. Download http_ca.crt from ES's server located in /etc/elasticsearch/certs
  2. Upload that exact same cert on the Kibana server
  3. Specify the location in kibana.yml with elasticsearch.ssl.certificateAuthorities
    3.1 Example: elasticsearch.ssl.certificateAuthorities: [ "/etc/kibana/certs/http_ca.crt" ]
  4. Ready to go, requests are automatically and securely authorized by ES

Now, let's say that I would also like to request data from ES from my Node.js server:

  1. Download http_ca.crt from ES's server located in /etc/elasticsearch/certs
  2. Upload that exact same cert on the Node.js server
  3. Install the official Node.js client @elastic/elasticsearch
    3.1. In the client initialization, specify the location of http_ca.crt
  4. Ready to go, requests are automatically and securely authorized by ES

Basically, for this to work we still need xpack.security.http.ssl.enabled: true, but we don't need a load balancer for that since we don't need to expose ES at all.

However, we still want to access the Kibana console on the public internet (e.g. https://kibana.mydomain.com) or send requests to the REST endpoints of our Node.js server. This is where we would use Nginx to expose them securely, using a different TLS certificate (which we are probably already using to expose our other servers) or adding basic auth (which Kibana already provides by default).
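For example, exposing Kibana could reuse the same pattern as the Elasticsearch block above; the internal address is an assumption, and 5601 is Kibana's default port:

server {
        listen          443 ssl http2;
        listen          [::]:443 ssl http2;
        server_name     kibana.mydomain.com;

        ssl_certificate         /path-to-cert-on-my-nginx-server/fullchain.pem;
        ssl_certificate_key     /path-to-key-on-my-nginx-server/privkey.pem;
        ssl_protocols           TLSv1.2 TLSv1.3;

        location / {
                # Kibana's default port on a hypothetical internal server
                proxy_pass      http://10.100.6.12:5601;
        }
}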

And to make it safe for production, just follow the important settings and system config docs.

So this is how it works, right?

No, this is not correct.

The http_ca.crt is effectively public information. It is the certificate of the CA that issued the Elasticsearch cluster's HTTP server certificate. It is not a secret, and it is not a credential.
Kibana needs a copy of it so that it can trust the HTTP service on the Elasticsearch nodes (by default, the HTTP certificate is not signed by a publicly trusted CA, so it will not be trusted automatically).

Kibana has its own set of credentials for connecting to Elasticsearch. The recommendation is to use what is referred to as a "service token". If you follow the automatic enrollment process, Kibana will request a service token from Elasticsearch and save it locally, so you do not need to perform any manual configuration.
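In kibana.yml this corresponds to something like the following sketch (all values are placeholders; the automatic enrollment may store the token in the Kibana keystore rather than in the file):

elasticsearch.hosts: [ "https://10.100.6.10:9200" ]
elasticsearch.ssl.certificateAuthorities: [ "/etc/kibana/certs/http_ca.crt" ]
# The actual credential - unlike the CA file, this must be kept secret:
elasticsearch.serviceAccountToken: "placeholder-service-token"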

Steps 1-3 are correct, but you can also use the CA fingerprint instead of copying the whole ".crt" file around, if you would prefer.
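If you go the fingerprint route, it can be read straight off the CA file, for example:

openssl x509 -fingerprint -sha256 -noout -in /etc/elasticsearch/certs/http_ca.crt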

As with Kibana, step 4 is not correct. The HTTP CA is not a credential, and it will not authenticate your Node.js service to Elasticsearch.
Instead, you need to use one of the authentication methods described in the docs, typically either an API key or basic authentication.
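Putting that together for the Node.js client, a sketch under those assumptions (path and key are placeholders) might look like:

import { readFileSync } from 'node:fs'
import { Client } from '@elastic/elasticsearch'

const client = new Client({
    node: 'https://10.100.6.10:9200',
    // Trust: the copied http_ca.crt (or use the caFingerprint option instead)
    tls: { ca: readFileSync('/etc/myapp/certs/http_ca.crt') },
    // Credential: an API key created in Elasticsearch does the authenticating
    auth: { apiKey: 'placeholder-api-key' },
})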
