Best practice for running Logstash on Kubernetes

Hi,

Currently my Logstash instances run on plain Docker. The config and pipeline config are mounted from the host filesystem into Logstash's container directories. I only use one host in this environment, so mounting from the filesystem is straightforward.

I am now learning Kubernetes on a private bare-metal three-node cluster. The goal is to run the complete stack on Kubernetes, but I want to start with Logstash. We only use memory queues, so Logstash looks like a stateless application to me (correct me if I am wrong; I am neglecting the UUID of the Logstash instance that I can see in monitoring).

What is the best practice in Kubernetes for dealing with pipeline configuration?

  1. Store the configuration in a Kubernetes ConfigMap (as I understand it, you can save the contents of a directory into a ConfigMap and mount it into a container directory)

  2. Use some kind of volume (accessible from all k8s cluster nodes)

  3. Build a new image (Elastic's default image plus my configuration copied in)?
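For illustration, here is a minimal sketch of option 1. All names (the ConfigMap name, the Deployment name, the pipeline content, the image tag) are assumptions for the example; the mount path is the default pipeline directory of the official Logstash image.

```yaml
# Hypothetical ConfigMap holding a single pipeline file. A whole
# directory could instead be captured imperatively with:
#   kubectl create configmap logstash-pipeline --from-file=pipeline/
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-pipeline
data:
  logstash.conf: |
    input { beats { port => 5044 } }
    output { stdout { codec => rubydebug } }
---
# Deployment that mounts the ConfigMap where Logstash looks for
# pipeline files by default (/usr/share/logstash/pipeline).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash
spec:
  replicas: 1
  selector:
    matchLabels: { app: logstash }
  template:
    metadata:
      labels: { app: logstash }
    spec:
      containers:
        - name: logstash
          image: docker.elastic.co/logstash/logstash:7.17.0  # pick your version
          volumeMounts:
            - name: pipeline
              mountPath: /usr/share/logstash/pipeline
      volumes:
        - name: pipeline
          configMap:
            name: logstash-pipeline
```

With this approach a pipeline change is just a ConfigMap update plus a pod restart, and no image rebuild is needed.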

What is best practice in production?

I slightly tend toward option 3. But to keep this configuration and the image private, I would need a private Docker registry that is accessible from all k8s nodes, right?
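If option 3 is chosen, pulling from a private registry can be sketched like this (the registry address, image name, and secret name are all hypothetical):

```yaml
# Pod spec fragment. The referenced secret would be created first, e.g.:
#   kubectl create secret docker-registry regcred \
#     --docker-server=registry.example.com \
#     --docker-username=<user> --docker-password=<password>
spec:
  containers:
    - name: logstash
      # custom image = Elastic base image + your config baked in
      image: registry.example.com/mycompany/logstash:custom
  imagePullSecrets:
    - name: regcred
```

The `imagePullSecrets` entry lets every node's kubelet authenticate against the private registry, so the registry only needs to be network-reachable from all nodes.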

Thanks,
Andreas

no opinions? :frowning: