Best practices for shipping Docker container application logs with Filebeat to Logstash

Hi,

I have a Docker service with two containers. Both have a web server component that produces access, error, and debug logs, and one container has a special module attached to the web server component, which produces two different kinds of logs of its own. I have mounted a persistent Docker volume for both containers, and the logs are written to that volume. A third, separate container running Filebeat reads and tracks the logs from the persistent volume and ships them to Logstash (I think this is similar to a "sidecar" pattern). My Logstash pipeline is not yet fully configured because I don't know what the best approach would be.
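For reference, the Filebeat sidecar is set up roughly like the sketch below (the log paths and the Logstash host are placeholders, not my real values):

```yaml
# filebeat.yml in the sidecar container (paths and host are placeholders)
filebeat.inputs:
  - type: filestream
    id: web-logs
    paths:
      - /var/log/app/web/access.log
      - /var/log/app/web/error.log
      - /var/log/app/web/debug.log
  - type: filestream
    id: module-logs
    paths:
      - /var/log/app/module/*.log

output.logstash:
  hosts: ["logstash:5044"]
```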

I think the easiest way would be to use the add_tags processor with specific tags, which would help me write the appropriate filtering in Logstash (Logstash eventually forwards everything to Elasticsearch).
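Roughly what I have in mind, with per-input tags in Filebeat and a tag-based filter in Logstash (the tag names are just examples):

```yaml
# filebeat.yml: tag each input so Logstash can tell the streams apart
filebeat.inputs:
  - type: filestream
    id: web-logs
    paths:
      - /var/log/app/web/*.log
    processors:
      - add_tags:
          tags: ["web"]
  - type: filestream
    id: module-logs
    paths:
      - /var/log/app/module/*.log
    processors:
      - add_tags:
          tags: ["module"]
```

```
# Logstash filter section branching on those tags (sketch)
filter {
  if "web" in [tags] {
    # access/error/debug log parsing here
  } else if "module" in [tags] {
    # module log parsing here
  }
}
```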

The second solution I came up with would be to run a second Filebeat container and delegate the web server log forwarding to one Filebeat container and the module logs to the other. In that case I could, if it benefited me, use two different pipelines in Logstash. The volume shared with the application containers would still be where the log files are stored.
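If I understand Logstash's multiple-pipeline support correctly, each Filebeat would then point at its own Beats port and pipelines.yml would look something like this (ids, paths and ports are made up):

```yaml
# pipelines.yml with one pipeline per Filebeat (ids, paths and ports are made up)
- pipeline.id: web-logs
  path.config: "/usr/share/logstash/pipeline/web.conf"     # contains: input { beats { port => 5044 } }
- pipeline.id: module-logs
  path.config: "/usr/share/logstash/pipeline/module.conf"  # contains: input { beats { port => 5045 } }
```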

The third option I thought of (not sure if it is doable) would be to extend the first solution: have Filebeat forward all logs to one Logstash pipeline, do initial filtering there, and then forward the results to another set of pipelines for more specific filtering and ETL.
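As far as I can tell this would mean Logstash's pipeline-to-pipeline communication, with pipelines.yml roughly like this (pipeline ids and paths are made up):

```yaml
# pipelines.yml for option three (ids and paths are made up)
- pipeline.id: intake
  path.config: "/usr/share/logstash/pipeline/intake.conf"   # Beats input, routes events by tag
- pipeline.id: web
  path.config: "/usr/share/logstash/pipeline/web.conf"      # input { pipeline { address => "web" } }
- pipeline.id: module
  path.config: "/usr/share/logstash/pipeline/module.conf"   # input { pipeline { address => "module" } }
```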

I looked at some similar discussions, but no pattern of best practices seems to emerge. Should I prefer or avoid any of the aforementioned solutions, or is there a fourth one that is better than the ones I came up with?

Thanks in advance!

The third option would use a distributor pattern based on tags.
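Roughly, the intake pipeline only routes on the tags set in Filebeat and the downstream pipelines do the specific filtering, something like this (tag names, addresses and the Elasticsearch host are examples):

```
# intake.conf - distributor pipeline that only routes, based on tags (sketch)
input {
  beats {
    port => 5044
  }
}

output {
  if "web" in [tags] {
    pipeline { send_to => ["web"] }
  } else if "module" in [tags] {
    pipeline { send_to => ["module"] }
  }
}
```

```
# web.conf - one of the downstream pipelines (module.conf would mirror it)
input {
  pipeline { address => "web" }
}

filter {
  # web-specific parsing and ETL here
}

output {
  elasticsearch { hosts => ["http://elasticsearch:9200"] }
}
```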
