Good news: I got it working! I have one additional question, but first I'll provide my setup in case anyone happens to stumble upon this. If you just want to see the question, skip to the end.
VM Max Map Count
First, you have to run sysctl -w vm.max_map_count=262144 (as root).
If you don't do this, your containers will crash.
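That setting only lasts until the next reboot. A sketch of making it permanent via a sysctl.d drop-in (the 99-elasticsearch.conf filename is just my choice, any name works):

```shell
# Apply immediately for the running kernel (requires root)
sudo sysctl -w vm.max_map_count=262144

# Persist across reboots; the filename is arbitrary
echo 'vm.max_map_count=262144' | sudo tee /etc/sysctl.d/99-elasticsearch.conf

# Reload all sysctl configuration files
sudo sysctl --system
```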
Elasticsearch & Kibana
I installed Elasticsearch and Kibana in Docker containers as shown here.
However, instead of running the containers with the -it
flag, which attaches them to your terminal, you should run them with the -d
flag, which runs them detached in the background.
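As a sketch, the Elasticsearch container from the Elastic docs started detached looks something like this (the image tag 8.14.0 is a placeholder; use whatever version and network name you set up when following the docs):

```shell
# Create the Docker network the Elastic docs use (skip if it exists)
docker network create elastic

# Run Elasticsearch detached (-d) instead of attached (-it);
# the image tag is a placeholder -- substitute your version
docker run -d --name es01 --net elastic -p 9200:9200 -m 1GB \
  docker.elastic.co/elasticsearch/elasticsearch:8.14.0
```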
Then, to get the enrollment token, you can run docker logs es01 and look for it in the startup output.
If that doesn't work, you can run the following commands to get the enrollment token and the elastic password:
docker exec -it es01 /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana
docker exec -it es01 /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic
Suricata
Once Elasticsearch and Kibana are up and running, you can install Suricata. I did that here. One note: you want to use the -v
option as shown, because that saves the eve.json logs to your local box (in my case, my Ubuntu server). This allows your logs to persist through container failures/restarts/shutdowns/etc.
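For illustration, the volume mount looks roughly like this (the jasonish/suricata image, the eth0 interface, and the host path are assumptions here; substitute whatever the guide you followed uses):

```shell
# -v maps a host directory into the container so eve.json survives
# container restarts; --net host lets Suricata watch a host interface.
# Image name, capabilities, interface, and paths are assumptions.
docker run -d --name suricata --net host \
  --cap-add=net_admin --cap-add=net_raw \
  -v /var/log/suricata:/var/log/suricata \
  jasonish/suricata:latest -i eth0
```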
Filebeat (Skip to the edit below if you want to install filebeat as a service so it will run persistently)
Now for filebeat. The documentation on installing filebeat is here. You can run filebeat in a container but since the logs are saved to the localhost I decided to run filebeat directly on the ubuntu server for now.
When configuring filebeat, make sure you edit filebeat.yml and give it the username and password for elastic, and also make sure you specify the host URL as https and enable ssl.
To generate the fingerprint needed for ssl, refer to this documentation, specifically: openssl x509 -fingerprint -sha256 -in config/certs/http_ca.crt
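Since the CA certificate lives inside the es01 container, you may need to copy it out first (or run openssl inside the container). A sketch:

```shell
# Copy the CA certificate out of the Elasticsearch container
docker cp es01:/usr/share/elasticsearch/config/certs/http_ca.crt .

# Print the SHA-256 fingerprint; filebeat's ca_trusted_fingerprint
# setting expects the hex string without the colons
openssl x509 -fingerprint -sha256 -noout -in http_ca.crt | \
  cut -d= -f2 | tr -d ':'
```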
You then need to run ./filebeat setup -e
Once that is done you can run ./filebeat modules enable suricata
You then need to cd into modules.d and edit suricata.yml to enable it and give it the path to your eve.json (the path on your local host).
Once that's done you should be able to start it up and it should run!
Now to my question:
When I run filebeat, it runs in the terminal (even if I try to run it in the background). This means that when my ssh session terminates, filebeat stops. It's not an issue right now, since when I start it back up it is able to send all the logs it missed from eve.json on to Elasticsearch. But I'm wondering: is there a way to run filebeat as a service? Or at least have it run in the background and persist after I close my ssh session?
The only other workaround I can think of is to make a cron job to run it.
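(One standard way to keep a foreground process alive after the ssh session ends is nohup, or a terminal multiplexer like tmux. A sketch, assuming you're in the filebeat directory:)

```shell
# Detach filebeat from the terminal so it survives ssh logout;
# stdout/stderr go to filebeat.out instead of the terminal
nohup ./filebeat -e > filebeat.out 2>&1 &

# Record the PID so it can be stopped later with kill
echo $! > filebeat.pid
```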
Edit
Got it reinstalled as a deb package. Just wanted to update this for anyone who finds this in the future.
My biggest frustration when troubleshooting issues I was having was finding all these posts that were abandoned when the answer was found. I want to make sure my process is documented.
Filebeat as a service
Now for filebeat. The documentation on installing filebeat is here. You can run filebeat in a container but since the logs are saved to the localhost I decided to run filebeat directly on the ubuntu server for now.
The filebeat.yml should be in /etc/filebeat
When configuring filebeat, make sure you edit filebeat.yml and give it the username and password for elastic, and also make sure you specify the host URL as https and enable ssl.
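For reference, the relevant output.elasticsearch section of filebeat.yml looks roughly like this (the host, password, and fingerprint values are placeholders; the fingerprint comes from the openssl command, with the colons removed):

```yaml
# /etc/filebeat/filebeat.yml -- Elasticsearch output over HTTPS.
# All values below are placeholders; substitute your own.
output.elasticsearch:
  hosts: ["https://localhost:9200"]
  username: "elastic"
  password: "<your-elastic-password>"
  ssl:
    enabled: true
    ca_trusted_fingerprint: "<sha256-fingerprint-without-colons>"
```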
To generate the fingerprint needed for ssl, refer to this documentation, specifically: openssl x509 -fingerprint -sha256 -in config/certs/http_ca.crt
You then need to cd into modules.d and edit suricata.yml to enable it (set enabled to true) and give it the path to your eve.json (the path on your local host).
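My modules.d/suricata.yml ended up looking something like this (the eve.json path is whatever directory you mounted with -v; the one below is a placeholder):

```yaml
# /etc/filebeat/modules.d/suricata.yml
- module: suricata
  eve:
    enabled: true
    # Host path where the -v mount writes eve.json; adjust to yours
    var.paths: ["/var/log/suricata/eve.json"]
```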
You now have to run filebeat setup -e to set up the index template and some dashboards.
Once that's done you should be able to start the service. On Ubuntu the command is sudo service filebeat start
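On a systemd-based Ubuntu you can also use systemctl directly, and enable the service so it starts on boot:

```shell
# Start filebeat now and enable it at boot (systemd)
sudo systemctl enable --now filebeat

# Check that it's running, and follow its logs
sudo systemctl status filebeat
sudo journalctl -u filebeat -f
```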
If you cat eve.json and there are logs in there, you should now be seeing those logs in Kibana.