I'm getting desperate now, so I really need your help please. I cannot find a guide anywhere for a production setup where you have nodes and Beats running on different servers; every example seems to use localhost, and I can't seem to make the jump to setting up my cluster correctly.
I've been battling for over a week trying to get a working SSL setup in my Elastic Stack, but I can't seem to create a working combination of certs.
My Linux-based cluster looks like this (I've changed some details slightly for security/common sense):
ES, Kibana and Logstash running on a single host (hosted in Linode), outside of the internal network where our Beats will be running.
external ip = 139.100.100.100
dns host name = monitor.myelastic.com
server name = flanders.mydomain.com
Firewalled off so it isn't accessible by anyone outside our network.
Logstash running on the default port 5044.
Filebeat running on any of our internal Linux servers to send logs to Logstash.
This traffic is the only traffic going over the internet, so we need SSL enabled.
I want to use elasticsearch-certutil to create the certs that Filebeat will use to send log data to Logstash with SSL enabled. I know the best practice in some guides I've read is to use a third-party cert like Let's Encrypt, but I was hoping the built-in util would be easier, and it's sufficient for us.
I'm trying to come up with a valid instances.yml file to use with the util, but I'm not sure what name, IP and DNS details to use. I've tried a few, but nothing seems to work.
With the details I've provided, could someone suggest a correct instances.yml I can use?
And also the correct filebeat.yml, and maybe the logstash.conf SSL lines, so that the two talk to each other using these new certs.
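For concreteness, this is roughly the sort of instances.yml I have been attempting (the instance name and layout are guesses on my part):

```yaml
instances:
  - name: "logstash"
    dns:
      - "monitor.myelastic.com"
    ip:
      - "139.100.100.100"
```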
Should be straightforward: share your anonymized one for the HTTPS connection... that's how it usually works, you show us :slight_smile:
And show the command you ran to generate it.
Then test connecting to Elasticsearch with curl?
Does Logstash already connect to Elasticsearch, since it's on the same server?
Do you need Logstash? Logstash takes a bit more work...
What version are you running?
Let's get curl over SSL working first... then move on to Logstash.
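For example, something along these lines; the path to the CA cert is an assumption about where you copied it out of your setup:

```sh
# Verify the HTTPS endpoint against your own CA instead of skipping checks with -k
curl --cacert /path/to/ca/ca.crt -u elastic https://monitor.myelastic.com:9200
```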
Also, did you make sure you bound Elasticsearch to the network?
You did not share your Docker setup, so it's hard for us to help with specifics... You told us about your journey but did not provide much in the way of configs.
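On the binding question: Elasticsearch only listens on localhost out of the box, so elasticsearch.yml needs something like this (which interface you bind to is up to you):

```yaml
# elasticsearch.yml - listen on all interfaces rather than just localhost
network.host: 0.0.0.0
```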
I'd like to have another go at creating my instances file, but may I ask a question first: if ES, Logstash and Kibana all exist on the same host, does the instances file only need one "name" entry under the instances section?
Up to now I have been creating a "name" for all three apps, but maybe I've been thinking about this all wrong.
I will try to answer some of your other questions:
Does Logstash already connect to Elasticsearch, since it's on the same server?
Yes, Logstash is connecting to ES.
Do you need Logstash?
Yes, we want to manipulate some of the data before it goes to ES.
What version are you running?
The latest version (as this is a brand new install).
I will try creating a cert based on the example instances I gave above until I get your response, and see if it works.
Just for info, this is purely a cert/SSL issue. If I disable SSL in my Logstash conf file (input section), the data flows from Filebeat, through LS and into ES perfectly fine. It only stops working when I try to enable SSL.
You need to add each DNS name that you are going to use, even if they point to the same server.
If you are going to access it using dns1.domain and dns2.domain, then both need to be added to the certificate.
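So in your case a single instance entry should be enough, as long as it lists every name and IP you will use to reach the host; a sketch using the details you gave (the name is just a label):

```yaml
instances:
  - name: "logstash"
    dns:
      - "monitor.myelastic.com"
      - "flanders.mydomain.com"
    ip:
      - "139.100.100.100"
```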
You need to provide context: share your Logstash configuration, share your Filebeat configuration, and share the log errors that you are getting.
If things stop working when you enable SSL, then your configuration is wrong or your certificate is wrong, but you need to share the errors you are getting and the configurations you are using.
And in my Filebeat container logs I see this error:
Failed to connect to backoff(async(tcp://logstash-external-ip:5044)): tls: invalid signature by the server certificate: crypto/rsa: verification error","service.name":"filebeat","ecs.version":"1.6.0"}
Both the response of your curl and the log from Beats suggest that there is something wrong with your certificate.
Also, you should not use -k when validating the certificate with the curl command, since it skips verification entirely.
How did you create the certificate and key for the beats input?
Is the key in PKCS#8 format? It is required.
For some reason the Beats documentation does not have any example of how to generate the keys and certificates, but you can follow the Elastic Agent documentation instead; it is basically the same thing.
Basically, you create a CA, then using this CA you create a client certificate in PEM format; on this certificate you do not specify any DNS or IP address.
After that you create the server certificate; in this certificate you specify the DNS names and IP addresses that are valid. This is the one you create using the instances.yml file.
Then you convert the server key to PKCS#8 format, as required.
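A sketch of the whole sequence using elasticsearch-certutil and openssl; the output names and paths are assumptions, adjust them to your layout:

```sh
# 1. Create a CA (the zip contains ca/ca.crt and ca/ca.key)
bin/elasticsearch-certutil ca --pem --out ca.zip && unzip ca.zip

# 2. Client certificate for Filebeat in PEM format, with no DNS or IP names
bin/elasticsearch-certutil cert --pem \
  --ca-cert ca/ca.crt --ca-key ca/ca.key \
  --name client --out client.zip && unzip client.zip

# 3. Server certificate for the beats input, names taken from instances.yml
bin/elasticsearch-certutil cert --pem \
  --ca-cert ca/ca.crt --ca-key ca/ca.key \
  --in instances.yml --out server.zip && unzip server.zip

# 4. Convert the server key to unencrypted PKCS#8, as the beats input requires
openssl pkcs8 -topk8 -nocrypt \
  -in logstash/logstash.key -out logstash/logstash.pkcs8.key
```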
I created the certs following the Elastic guide on how to set up ELK using Docker Compose, so they are auto-created in a container for me based on what is in my compose file.
So the key is in whatever format the elastic util creates by default; the line that creates it comes straight from that guide's docker compose file.
Just to confirm: are we talking about the ca.key that is created, or the other .key files that get created in the ES, Logstash and Kibana folders?
And once converted, I presume I then need to manually copy them to the cert volume location so the containers can read them, and change any config file to reference the new keys (if the name changes)?
"message":"Failed to connect to backoff(async(tcp://elk-logstash01-1:5044)): tls: invalid signature by the server certificate: crypto/rsa: verification error","service.name":"filebeat","ecs.version":"1.6.0"}
and
Attempting to reconnect to backoff(async(tcp://elk-logstash01-1:5044)) with 3 reconnect attempt
I think I have fixed this, so don't spend much time replying (if anyone was planning to do so). I will update once I've confirmed it's resolved, in case it helps anyone else.
To resolve this I recreated all the certs for the CA and nodes using these commands. Note these were inside my docker compose file, in a setup container that creates the certs before starting up the ELK-related containers. The docker compose file creates a Docker volume called certs, where the new certs are copied to.
I also used an openssl command to convert the .key file that the elasticsearch util creates into PKCS#8 format, which seemed to be the last bit of the puzzle (this is in a guide linked earlier in this thread).
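For anyone who finds this later, the working SSL sections ended up looking roughly like the sketch below; the cert paths are placeholders for wherever your certs volume is mounted, and on older versions of the beats input plugin the options are named ssl and ssl_verify_mode => "force_peer" instead of ssl_enabled and ssl_client_authentication:

```yaml
# filebeat.yml - ship to Logstash over TLS, trusting our own CA
output.logstash:
  hosts: ["monitor.myelastic.com:5044"]
  ssl.certificate_authorities: ["/usr/share/filebeat/certs/ca/ca.crt"]
  ssl.certificate: "/usr/share/filebeat/certs/client/client.crt"
  ssl.key: "/usr/share/filebeat/certs/client/client.key"
```

```
# logstash.conf - beats input with SSL and client-certificate verification
input {
  beats {
    port => 5044
    ssl_enabled => true
    ssl_certificate => "/usr/share/logstash/certs/logstash/logstash.crt"
    ssl_key => "/usr/share/logstash/certs/logstash/logstash.pkcs8.key"
    ssl_certificate_authorities => ["/usr/share/logstash/certs/ca/ca.crt"]
    ssl_client_authentication => "required"
  }
}
```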