SSL setup for Filebeat and Logstash in Docker - getting desperate now

Morning everyone.

I'm getting desperate now, so I really need your help please. I cannot find a guide anywhere for a production setup, where the nodes and Beats run on different servers; it all seems to be localhost in the examples, and I can't seem to make the jump to setting up my cluster correctly.

I've been battling for over a week trying to get a working SSL setup in my Elastic Stack, but I can't seem to get a working combination of certs created.

My Linux-based cluster is this (I've changed some details slightly for security/common sense):

ES, Kibana and Logstash running on a single host (hosted in Linode) outside of the internal network where our Beats will be running.
external ip = 139.100.100.100
dns host name = monitor.myelastic.com
server name = flanders.mydomain.com
Firewalled off so it isn't accessible by anyone outside our network.

Logstash running on default port 5044.

Filebeat running on any of our internal Linux servers to send logs to Logstash.

This traffic is the only traffic going over the internet so we need SSL enabled.

I want to use elasticsearch-certutil to create the certs that Filebeat will use to send log data to Logstash with SSL enabled. I know best practice in some guides I've read is to use a 3rd-party cert like Let's Encrypt, but I was hoping the inbuilt util would be easier, and it's sufficient for us.

I'm trying to come up with a valid instances.yml file to use with the util, but I'm not sure what name, ip and dns details to use. I've tried a few but nothing seems to work.

With the details I've provided, could someone suggest a correct instances.yml I can use?

And also the correct filebeat.yml and maybe logstash.conf SSL lines, so that they both talk to each other using these new certs.

Elastic community... you are my only hope.

Hi @daverodgers

Did you look at the example instances file here?

Should be straightforward - share your anonymized one for the HTTPS connection... That's how it usually works: you show us :slight_smile:

And show the command you ran to generate the certs.

Then test with curl that you can connect to Elasticsearch.

Does Logstash already connect to Elasticsearch, since it's on the same server?

Do you need Logstash? Logstash takes a bit more work.

What version are you running?

Let's get curl over SSL working first... then move on to Logstash.

Also, did you make sure you bound elasticsearch to the network?

You did not share your Docker setup, so it's hard for us to help with specifics... you told us your journey but did not provide much in the way of configs.


Hi Stephen,

Thanks for taking the time to reply.

I'd like to attempt another go at creating my instances file, but may I ask a question first? If ES, Logstash and Kibana all exist on the same host, does the instances file only need one "name" entry under the instances section?

So in my case it would be:

instances:
  - name: flanders
    ip: 
      - 139.100.100.100
    dns: 
      - monitor.myelastic.com
      - flanders.mydomain.com

Up to now I have been creating a "name" entry for all 3 apps, but maybe I've been thinking about this all wrong.

I will try and answer some of your other questions:

Does logstash already connect to elasticsearch since on same server?

  • yes, Logstash is connecting to ES

Do you need logstash?

  • yes, we want to manipulate some of the data before it goes to ES

What version are you running?

  • latest version (as this is a brand new install)

I will try creating a cert based on the example instances file above while I wait for your response, and see if it works.

Just for info: this is purely a cert/SSL issue. If I disable SSL in my Logstash conf file (input section), the data flows from Filebeat, through LS and into ES perfectly fine. It only stops working when I try to enable SSL.

thanks

You need to add each DNS name that you are going to use, even if they are on the same server.

If you are going to access it using dns1.domain and dns2.domain, then both need to be added to the certificate.
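
To illustrate with plain openssl (the file names and certificate details below are stand-ins for this thread's anonymized ones; elasticsearch-certutil achieves the same via instances.yml), a certificate is only trusted for the names listed in its subjectAltName, so every name you connect with has to appear there:

```shell
# Throwaway self-signed cert carrying both DNS names and the IP as SANs.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout demo.key -out demo.crt -subj "/CN=flanders" \
  -addext "subjectAltName=DNS:monitor.myelastic.com,DNS:flanders.mydomain.com,IP:139.100.100.100"

# List the names the certificate is actually valid for.
openssl x509 -in demo.crt -noout -ext subjectAltName
```

If a hostname you curl or configure in Filebeat is missing from that list, the TLS handshake fails with a name-mismatch error.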

You need to provide context: share your Logstash configuration, share your Filebeat configuration, and share the log errors that you are getting.

If things stop working when you enable SSL, then your configuration is wrong or your certificate is wrong, but you need to share the errors you are getting and the configurations you are using.

Without this it is pretty hard to troubleshoot.


Logstash config:

input {
  beats {
    port => 5044
    ssl_enabled => true
    ssl_certificate_authorities => "certs/ca/ca.crt"
    ssl_certificate => "certs/newcerts/logstash/logstash.crt"
    ssl_key => "certs/ca/ca.key"
  }
}
output {
    elasticsearch {
        hosts => ["https://es01:9200"]
        user => "elastic"
        password => "password123"
        ssl_enabled => true
        ssl_certificate_authorities => "certs/ca/ca.crt"
       }
}

The filebeat.yml is:

filebeat.inputs:
  - type: log
    paths:
      - /var/log/*.log
      - /var/log/messages

output.logstash:
  hosts: ["logstashserver-external-ip:5044"]
  ssl.enabled: true
  ssl.certificate_authorities: "certs/ca/ca.crt"
  ssl.certificate: "certs/ca/ca.crt"
  ssl.key: "certs/ca/ca.key" 

If I try a test curl command from inside the Filebeat container:

curl -v -k --cacert ./certs/ca/ca.crt https://logstashserver-external-ip:5044

I get the following response:

* successfully set certificate verify locations:
*   CAfile: ./certs/ca/ca.crt
  CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Request CERT (13):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (OUT), TLS alert, decrypt error (563):
* error:0407E085:rsa routines:RSA_verify_PKCS1_PSS_mgf1:first octet invalid
* Closing connection 0
curl: (35) error:0407E085:rsa routines:RSA_verify_PKCS1_PSS_mgf1:first octet invalid

And in my Filebeat container logs I see this error:

Failed to connect to backoff(async(tcp://logstash-external-ip:5044)): tls: invalid signature by the server certificate: crypto/rsa: verification error","service.name":"filebeat","ecs.version":"1.6.0"}

thanks

Both the response of your curl and the log from Beats suggest that there is something wrong with your certificate.

Also, you should not use -k when validating the certificate with the curl command.

How did you create the certificate and key for the beats input?

Is the key in the PKCS#8 format? It is required.

For some reason the Beats documentation does not have any example of how to generate the keys and certificates, but you can follow the Elastic Agent documentation on how to generate them; it is basically the same thing.

You can check the documentation here.

Basically you create a CA, then using this CA you create a client certificate in the PEM format; on this certificate you do not specify any DNS or IP address.

After that you create the server certificate; in this certificate you specify the DNS names and IP addresses that are valid. It is the one you create using the instances.yml file.

Then you convert the key to the PKCS#8 format, as it is required.
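
A minimal sketch of that conversion step, assuming the key certutil produced is called logstash.key (a freshly generated key stands in for it here):

```shell
# Stand-in for the .key file that elasticsearch-certutil produced.
openssl genrsa -out logstash.key 2048

# Convert it to an unencrypted PKCS#8 key for the Logstash beats input.
openssl pkcs8 -topk8 -inform PEM -outform PEM -nocrypt \
  -in logstash.key -out logstash.pkcs8.key

# A PKCS#8 key begins with "BEGIN PRIVATE KEY" (a traditional RSA key
# begins with "BEGIN RSA PRIVATE KEY" instead).
head -n 1 logstash.pkcs8.key
```

Point the beats input's ssl_key at the converted .pkcs8.key file; ssl_certificate keeps pointing at the unchanged .crt.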

Hi leandro

I created the certs following the Elastic guide on how to set up ELK using Docker Compose, so it auto-creates them in a container for me, based on what is in my compose file.

So the key is in whatever format the Elastic util creates by default. This is the line in the Elastic guide / Docker Compose file that creates it:

bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;

I guess I will try recreating my certs again and resetting; I've changed so much trying to resolve this that it has become very confusing.

The key needs to be in the PKCS#8 format; it will not work if it is not in this format.

The elasticsearch-certutil tool does not create keys in this format; you need to convert them using openssl, as explained in the documentation linked.

hi

thanks for that info.

Just to confirm, are we talking about the ca.key that is created, or the other .key files that get created in the ES / Logstash / Kibana folders?

And once converted, I presume I then need to manually copy them to the cert volume location so the containers can read them, and change any config file to reference the new keys (if the name changes)?

thanks again.

OK, so I have created the PKCS#8 key using the openssl command.

I have changed my filebeat.yml file to reference the new version of the key:

output.logstash:
  hosts: elk-logstash01-1:5044
  index: filebeat
  ssl.certificate_authorities: ["certs2/ca/ca.crt"]
  ssl.verification_mode: none
  ssl.certificate: "certs2/logstash/logstash.crt"
  ssl.key: "certs2/logstash/logstash.pkcs8.key"

In the Filebeat logs I still see this:

"message":"Failed to connect to backoff(async(tcp://elk-logstash01-1:5044)): tls: invalid signature by the server certificate: crypto/rsa: verification error","service.name":"filebeat","ecs.version":"1.6.0"}

and

Attempting to reconnect to backoff(async(tcp://elk-logstash01-1:5044)) with 3 reconnect attempt

Stephen,

when I run

curl -v --cacert /var/lib/docker/volumes/elk_certs/_data/ca/ca.crt https://made-up-hostname.com:5044

I get this response:

* Connected to made-up-hostname.com (external-server-ip) port 5044 (#0)
* ALPN: offers h2,http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
*  CAfile: /var/lib/docker/volumes/elk_certs/_data/ca/ca.crt
*  CApath: /etc/ssl/certs
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Request CERT (13):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Certificate (11):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN: server did not agree on a protocol. Uses default.
* Server certificate:
*  subject: CN=logstash
*  start date: Aug 19 21:06:50 2024 GMT
*  expire date: Aug 19 21:06:50 2027 GMT
*  subjectAltName: host "made-up-hostname.com" matched cert's "made-up-hostname.com"
*  issuer: CN=Elastic Certificate Tool Autogenerated CA
*  SSL certificate verify ok.
* using HTTP/1.x
> GET / HTTP/1.1
> Host: made-up-hostname:5044
> User-Agent: curl/7.88.1
> Accept: */*
> 
* TLSv1.3 (IN), TLS alert, bad certificate (554):
* OpenSSL SSL_read: OpenSSL/3.0.13: error:0A000412:SSL routines::sslv3 alert bad certificate, errno 0
* Closing connection 0
curl: (56) OpenSSL SSL_read: OpenSSL/3.0.13: error:0A000412:SSL routines::sslv3 alert bad certificate, errno 0

The bad cert error obviously matches the error I see in the Logstash logs, so I think my problem is that the CA cert that was created has an issue :frowning:

But I created it using this command inside the Docker Compose file:

bin/elasticsearch-certutil ca --silent --pem -out config/certs2/ca.zip;
          unzip config/certs2/ca.zip -d config/certs2

Any ideas?

I think I have fixed this, so don't spend much time replying (if anyone was planning to). I will update once I've confirmed it's resolved, in case it helps anyone else.

To resolve this I recreated all the certs for the CA and nodes using the commands below. Note these were inside my Docker Compose file, in a setup container that creates the certs before starting up the ELK-related containers; the compose file creates a Docker volume called certs that the new certs are copied to.
I also used an openssl command to convert the .key file that the Elasticsearch util creates into the PKCS#8 format, which seemed to be the last piece of the puzzle (this is in a guide linked earlier in this thread).

bin/elasticsearch-certutil ca --silent --pem -out config/certs2/ca.zip;
          unzip config/certs2/ca.zip -d config/certs2;

bin/elasticsearch-certutil cert --silent --pem -out config/certs2/certs.zip --in config/certs2/instances.yml --ca-cert config/certs2/ca/ca.crt --ca-key config/certs2/ca/ca.key;
          unzip config/certs2/certs.zip -d config/certs2;

My filebeat.yml file looks like this:

filebeat.inputs:
- type: filestream
  id: default-filestream
  paths:
    - ingest_data/*.log
    - /var/log/*.log

output.logstash:
  hosts: elk-logstash01-1:5044
  index: filebeat
  ssl.certificate_authorities: ["config/certs2/ca/ca.crt"]
  ssl.verification_mode: none
  ssl.certificate: "config/certs2/logstash/logstash.crt"
  ssl.key: "config/certs2/logstash/logstash.pkcs8.key"

My logstash.conf file is this:

input {
  beats {
    port => 5044
    ssl_enabled => true
    ssl_certificate_authorities => "config/certs2/ca/ca.crt"
    ssl_certificate => "config/certs2/logstash/logstash.crt"
    ssl_key => "config/certs2/logstash/logstash.pkcs8.key"
  }
}
output {
    elasticsearch {
        hosts => ["https://es01:9200"]
        user => "elastic"
        password => "****"
        ssl_enabled => true
        ssl_certificate_authorities => "config/certs2/ca/ca.crt"
       }
}

Thanks to everyone who replied.
