Handling permissions on logs and certificates with filebeat / logstash

Hey folks,

I'm currently setting up an Elastic Stack in my company using the official Docker containers.
There's one thing I haven't found a satisfying answer to so far. When I run Filebeat in a Docker container, I need to mount the log files that I want it to analyze into the container as a volume. But by default the "filebeat" user inside the container does not have permission to read these files.

I don't want to run filebeat as root, even in the container, so I added a user called "filebeat" on the host and added it to the group "adm", which owns the files in my case. This allows filebeat to read the files.
However, this seems like a workaround. Is there a best practice for this issue?
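Concretely, the host-side setup I ended up with looks roughly like this (run as root; assumes a Debian-like system where logs are owned by root:adm, and the "filebeat" user name matches my setup):

```shell
# Ensure the group exists, create a dedicated service user if needed,
# and add it to "adm" so it can read the group-readable log files.
groupadd -f adm                                   # no-op if the group exists
id -u filebeat >/dev/null 2>&1 || useradd --system --no-create-home filebeat
usermod -aG adm filebeat

# Sanity check: "adm" should show up in filebeat's groups.
id -nG filebeat
```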

The same problem arises when I mount SSL certificates from the host into the container for encrypted communication between Logstash and Filebeat. The "filebeat" and "logstash" users in the containers can't read the certificate files, so in this case I'd need to add them to the root group or make the files world-readable. Both options seem very unsatisfactory from a security perspective.

Since these issues also arise when running the ELK stack or Filebeat directly on the host, without a Docker container, I assume there must be some sort of best practice.

If there's any documentation on this, feel free to post a link.

Best regards

Hi @va1entin,

What you mention is aligned with general good practice: on Debian, for example, system logs belong to the adm group, and users that need to read or collect logs can be added to that group.
For SSL, if the certificates are only intended to be used by the filebeat user, they can be owned by that user with permissions for this user only, so no user apart from root can read them.
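For example, something like this (the user name and path are placeholders for your own setup; the runnable part just demonstrates the mode-600 effect on a throwaway file):

```shell
# On the real host, as root, you would do something like:
#   chown filebeat:filebeat /etc/filebeat/filebeat.key
#   chmod 600 /etc/filebeat/filebeat.key
# Demonstration of what mode 600 means, on a temporary file:
key=$(mktemp)
chmod 600 "$key"
stat -c '%a' "$key"   # owner read/write only; group and others get nothing
rm -f "$key"
```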
Also, if you are using Docker, apart from running the container as the filebeat user, you can add a further layer of security by mounting only the files the container needs to read, and mounting them read-only, so they cannot be altered in any case.
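For instance (image tag and paths are illustrative, not from your setup):

```shell
# Mount only what is needed, read-only, instead of whole directories.
docker run --rm \
  -v /var/log/syslog:/var/log/syslog:ro \
  -v /etc/filebeat/cert.pem:/usr/share/filebeat/cert.pem:ro \
  docker.elastic.co/beats/filebeat:6.2.3
```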

Hey @jsoriano,

thanks for your reply!
I asked for feedback in some IRC channels as well, and I'll document what I'm doing now for fellow users who ask themselves this question:

Filebeat:

  • I built my own Dockerfile which does nothing besides taking the original Filebeat image and adding the user "filebeat" to the "adm" group. As you mentioned, this is enough for it to be able to read log files that are mounted into the container as volumes.
    You can use the image if you want, I put it on Dockerhub, there's also a link to GitHub there: https://hub.docker.com/r/va1entin/filebeat-docker/
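In essence, the Dockerfile just does this (a minimal sketch; the exact base image tag, and whether usermod is available in it, may differ from my published image):

```dockerfile
FROM docker.elastic.co/beats/filebeat:6.2.3
USER root
# Let the existing "filebeat" user read host logs mounted as volumes.
RUN usermod -aG adm filebeat
USER filebeat
```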

Kibana/Logstash/Elasticsearch:

  • I built my own Dockerfile which puts the certificate and private key I want to use from the host into the container during build. This image is then built and started by Ansible in my configuration.

The Dockerfile for Kibana looks as follows:

```dockerfile
FROM docker.elastic.co/kibana/kibana-oss:6.2.3
USER root
COPY private.key /usr/share/kibana/private.key
COPY cert.pem /usr/share/kibana/cert.pem
RUN chown kibana:kibana /usr/share/kibana/cert.pem /usr/share/kibana/private.key
USER kibana
```

With this and the appropriate Kibana config SSL works immediately with the container and my host certificate and I don't need to change anything on the host.
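For reference, the matching part of kibana.yml looks roughly like this (setting names per the Kibana 6.x SSL configuration; paths match the Dockerfile above):

```yaml
server.ssl.enabled: true
server.ssl.certificate: /usr/share/kibana/cert.pem
server.ssl.key: /usr/share/kibana/private.key
```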


Be careful when adding private keys to Docker images: if you distribute an image, anyone with access to it also has access to the keys.

Oh, and thanks for sharing your discoveries :)

Sure, that's why I build the images that include private keys directly on the target machines. The Filebeat image doesn't include any keys :)


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.