Is it ok to run Logstash in Docker for Production?

I haven't seen much written about this on the web. Is it recommended to run Logstash in Docker in a production environment? What are some things to look out for? I'm looking to run multiple Logstash services in Kubernetes.

I haven't seen any reports of problems and I've been running Logstash in Docker myself (along with various other applications, both JVM-based and otherwise) for over two years with very few Docker-related issues.

You'll obviously have to evaluate the matter in your own environment.

Over the past few months, Elastic has come out with some amazing videos and webinars about how they recommend running Logstash in Kubernetes. This is one of them: https://www.elastic.co/webinars/elasticsearch-log-collection-with-kubernetes-docker-and-containers

Autodiscovery is the crux. Instead of, say, configuring every RabbitMQ pod to also carry a Logstash agent (either as another process in the RMQ container or as a sidecar container in the same pod) and making sure the proper configuration is baked into each one, you just run a DaemonSet of Logstash. As pods come and go over time, it autodetects which ones are running RMQ and scrapes the interesting logs. Because you're running standard Docker images, RMQ consistently writes its logs to the same place in the container. The Logstash DaemonSet "pulls" from your pods via k8s/Docker internals; you don't have to train your app or agent to "push". See the sketch below.
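For illustration, here's a minimal sketch of what such a DaemonSet might look like. The namespace, image tag, and mount paths are my assumptions, not anything from this thread; adjust them for your cluster:

```yaml
# Minimal Logstash DaemonSet sketch, one pod per node.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: logstash
  namespace: logging                 # hypothetical namespace
spec:
  selector:
    matchLabels:
      app: logstash
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
        - name: logstash
          image: docker.elastic.co/logstash/logstash:7.17.0   # pin your own version
          volumeMounts:
            - name: containerlogs
              mountPath: /var/log/containers
              readOnly: true
      volumes:
        - name: containerlogs
          hostPath:
            path: /var/log/containers   # where the container runtime writes pod logs
```

Because it's a DaemonSet, every node gets exactly one collector pod with the node's container log directory mounted in, which is what lets it "pull" logs from whatever pods land on that node.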

Now take this concept and apply it to other types of pods whose logs you want to scrape, and again to other kinds of Beats (Metricbeat, Packetbeat). Note that metadata about Docker, Kubernetes, the host, and the cloud provider is appended to your log events for you; an example follows.
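As a rough illustration, that metadata enrichment in Beats is typically enabled with processors like these in the Beat's config (which processors you enable, and their options, will depend on your version and environment, so treat this as a sketch):

```yaml
# Beats processors that enrich each event with context about where it came from.
processors:
  - add_kubernetes_metadata: ~   # pod name, namespace, labels, node
  - add_docker_metadata: ~       # container id, image, labels
  - add_host_metadata: ~         # hostname, OS, architecture
  - add_cloud_metadata: ~        # cloud provider, instance id, region
```

With these in place, every event you ship already carries the pod/container/host context, so you can filter and correlate downstream without teaching each application to label its own logs.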
