I'm using Filebeat 6.3.2 to ship container logs.
This is my current Filebeat configuration:
---
filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        - condition:
            contains:
              docker.container.name: dwp-
          config:
            - type: log
              paths:
                - /var/lib/docker/containers/${data.docker.container.id}/*.log
              json.keys_under_root: true
              fields:
                level: info
                labels: ${data.docker.container.labels.label1}

name: filebeat
logging.level: info
fields_under_root: true

processors:
  - decode_json_fields:
      fields: ["log"]
      target: "mslog2"
  - rename:
      fields:
        - from: "log"
          to: "mslog"

path.data: /filebeat/data
filebeat.registry_file: ${path.data}/myregistry

output.console:
  enabled: true
  pretty: true
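To see why autodiscover skips a container, Filebeat can also be started with a debug selector enabled so that the Docker events and the configs built from the template show up in the output. This is only a sketch; that "autodiscover" is the relevant selector name is an assumption on my part:

filebeat -e -d "autodiscover"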
When I start containers, some of them are detected and some aren't. In the example below, the first container 593c3fe4666ddf0c1c02fb6b9fac290da49ebdc5a137b35aed09f6fb23830b5c is detected and its log files are shipped; the second container 3f98fe64b36e014545bf038fc1e077454bb67787e9935b1e5114483fdd371d52 is not detected.
Starting the containers (the first container is detected, the second isn't):
docker run --name dwp-mno --log-driver json-file --log-opt max-size=2m --log-opt max-file=2 -d centos sh -c 'for ((j=1; j<=3000; j++)) do for ((i=1; i<=10; i++)) ; do echo " {w: $j }"; sleep 0.01; done; done'
593c3fe4666ddf0c1c02fb6b9fac290da49ebdc5a137b35aed09f6fb23830b5c
docker run --name dwp-pqr --log-driver json-file --log-opt max-size=2m --log-opt max-file=2 -d centos sh -c 'for ((j=1; j<=3000; j++)) do for ((i=1; i<=10; i++)) ; do echo " {w: $j }"; sleep 0.01; done; done'
3f98fe64b36e014545bf038fc1e077454bb67787e9935b1e5114483fdd371d52
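Because the template interpolates ${data.docker.container.labels.label1}, the containers can be checked for the expected name and label with docker inspect. This is just a sketch; "label1" is simply the label name used in the config above:

docker inspect --format '{{.Name}} {{index .Config.Labels "label1"}}' 3f98fe64b36e014545bf038fc1e077454bb67787e9935b1e5114483fdd371d52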
Filebeat output:
Aug 30 09:22:28 xxx docker[52630]: 2018-08-30T09:22:28.250+0200 INFO log/input.go:118 Configured paths: [/var/lib/docker/containers/**593c3fe4666ddf0c1c02fb6b9fac290da49ebdc5a137b35aed09f6fb23830b5c**/*.log]
Aug 30 09:22:28 xxx docker[52630]: 2018-08-30T09:22:28.250+0200 INFO autodiscover/autodiscover.go:144 Autodiscover starting runner: input [type=log, ID=5438576869600517932]
Aug 30 09:22:28 xxx docker[52630]: 2018-08-30T09:22:28.250+0200 INFO input/input.go:88 Starting input of type: log; ID: 5438576869600517932
Aug 30 09:22:28 xxx docker[52630]: 2018-08-30T09:22:28.251+0200 INFO log/harvester.go:228 Harvester started for file: /var/lib/docker/containers/**593c3fe4666ddf0c1c02fb6b9fac290da49ebdc5a137b35aed09f6fb23830b5c**/593c3fe4666ddf0c1c02fb6b9fac290da49ebdc5a137b35aed09f6fb23830b5c-json.log
Aug 30 09:22:55 xxx docker[52630]: 2018-08-30T09:22:55.009+0200 INFO [monitoring] log/log.go:124 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":137540,"time":{"ms":559}},"total":{"ticks":248990,"time":{"ms":1028},"value":248990},"user":{"ticks":111450,"time":{"ms":469}}},"info":{"ephemeral_id":"2d8784f0-bf6b-4658-8699-a0499df8838d","uptime":{"ms":7350035}},"memstats":{"gc_next":57799152,"memory_alloc":29039104,"memory_total":17526160320}},"filebeat":{"events":{"active":-52,"added":3241,"done":3293},"harvester":{"open_files":7,"running":7,"started":1}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":3292,"batches":30,"total":3292},"read":{"bytes":1047},"write":{"bytes":2788639}},"pipeline":{"clients":107,"events":{"active":34,"filtered":1,"published":3263,"total":3263},"queue":{"acked":3292}}},"registrar":{"states":{"current":71,"update":3293},"writes":{"success":31,"total":31}},"system":{"load":{"1":5.32,"15":1.68,"5":2.61,"norm":{"1":0.665,"15":0.21,"5":0.3263}}}}}}
Is this a known issue, or is something wrong with my configuration?