Filebeat harvest only currently running docker containers

I am currently running Filebeat to harvest all logs in /var/lib/docker/containers/*/*-json.log, but this harvests logs from all containers, even ones that are no longer running. Is there a way around this, or should a different approach be used?

Hi @kwojcicki,

This shouldn't worry you if you plan to ship all logs anyway; the initial sync may be an issue, though.

You can consider pruning your previous state; have a look at the docker system prune command.

Also, we are already working on new features that will help with this: https://github.com/elastic/beats/pull/5245

Hey @exekias, thanks for the quick reply. The autodiscover feature looks great, but until it is out I am worried about devs running Filebeat locally. Asking them to constantly run docker container prune seems annoying when they are starting/stopping many containers. I saw your comment about using drop_event to whitelist/blacklist containers on this GitHub issue: https://github.com/elastic/beats/issues/918#issuecomment-335999673 . Is this a viable solution, and if so, how would it work?

Understood. There are several things you can do:

1- Put the Docker ID of the container you are interested in into filebeat.yml; you can probably script this.

2- Whitelist containers by name, image, or labels.

For example, you can do something like this to drop any event not matching one of the images you want:

filebeat.prospectors:
- type: log
  paths:
    - '/var/lib/docker/containers/*/*.log'
  json.message_key: log
  json.keys_under_root: true
  processors:
  - add_docker_metadata: ~
  - drop_event.when.not.or:
      - equals:
          docker.container.image: busybox
      - equals:
          docker.container.image: alpine
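
For option 1, here is a minimal sketch of such a script. The generate_paths helper name and the redirect target are hypothetical, and the path layout assumes the default Docker json-file logging driver; adjust to match your filebeat.yml:

```shell
# Hypothetical helper for option 1: given container IDs on stdin,
# print one Filebeat path entry per container.
generate_paths() {
  while read -r id; do
    printf "    - '/var/lib/docker/containers/%s/%s-json.log'\n" "$id" "$id"
  done
}

# In practice you would feed it only the currently running containers, e.g.:
#   docker ps --quiet --no-trunc | generate_paths
# and splice the result into the paths: section of filebeat.yml,
# re-running it whenever containers are started or stopped.
```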

We decided to go for option #1; as mentioned, it was a fairly simple script :grinning:. For the autodiscover feature, is there an ETA for when it will be merged and generally available?

I don't have an ETA on when it will be merged, although it's pretty advanced :slight_smile: Once that happens, it should make it into the next release in the 6.x line.

Sounds good, I will follow the GitHub PR to see when it's merged. Thanks for the help; this can be closed.