I think I might have gone down the wrong path here, so I just need a little guidance and some pros/cons of what I've done vs. other ways of solving this problem.
I have multiple Docker containers, each with a microservice in it, and more will come in time. Each one logs in a different format: Apache, Gunicorn, custom NLog, custom Python logging, etc.
Right now I'm exposing the log files from each container with Docker volume mappings in the docker-compose.yml,
e.g.:
services:
  vehicle_service:
    image: ...
    volumes:
      - ../logs/microservice1/:/root/microservice1/logs
Then Filebeat in my logging stack (4 ELKB Docker containers) is configured like this:
services:
  filebeat:
    image: ...
    volumes:
      - ./filebeat/config/filebeat.yml:/usr/share/filebeat/filebeat.yml
      - ../deployment/logs:/usr/share/logs/deployment
with this filebeat.yml:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /usr/share/logs/deployment/**/*.log
  tags: ["microservice"]
I can use different subfolders under the logs folder to separate the different types of logs and tag them for different parsing in Logstash, along the lines of the sketch below.
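For example (an illustrative expansion with assumed subfolder names like apache/ and gunicorn/, not my exact config), the inputs end up looking something like this, and my Logstash pipeline then branches on the tags with conditionals like if "apache" in [tags] { ... }:

filebeat.inputs:
- type: log
  paths:
    - /usr/share/logs/deployment/apache/*.log    # Apache-format logs
  tags: ["microservice", "apache"]
- type: log
  paths:
    - /usr/share/logs/deployment/gunicorn/*.log  # Gunicorn-format logs
  tags: ["microservice", "gunicorn"]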
This all works fine.
BUT, is this a good way to do it? It seems wrong to have this file-location dependency between the microservice containers and my logging stack containers. And I need to think about where this common folder lives... if my services are distributed across different nodes, I'll need network storage.
I've seen other examples using a Filebeat prospector with the docker input type (my understanding of that approach is sketched below). Is this a better way for my scenario?
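For reference, the examples I've seen look roughly like the sketch below (untested, and the exact syntax probably depends on the Filebeat version; older versions use filebeat.prospectors instead of filebeat.inputs). The docker input tails the container JSON logs straight off the Docker host, so instead of sharing a logs folder the filebeat service would mount /var/lib/docker/containers read-only, plus /var/run/docker.sock for the add_docker_metadata processor:

filebeat.inputs:
- type: docker             # tail container stdout/stderr JSON logs
  containers.ids:
    - '*'                  # follow every container on this host

processors:
- add_docker_metadata: ~   # add container name, image and labels to each event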
Thanks