Filebeat autodiscover for docker does not work on /var/lib/docker/containers

Hi,

I am quite puzzled by the autodiscover feature for tailing Docker logs.
It looks quite useful, but despite reading the documentation and the few posts about it, I could not get it fully working.
To be clear: I managed to get a copy of the logs from within /var/lib/docker/volumes/whatever, but I could not do the same with /var/lib/docker/containers.

Filebeat runs alongside the ELK stack in a Swarm environment.
They run on their own network, so they are reachable by service name (e.g. logstash:5044).

Here's my config:

filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        # conditions are not compulsory. I have tried to remove them, but the problem remains
        - condition.or:
          - contains.docker.container.name: "apache"
          - contains.docker.container.name: "nginx"
          config:
          - type: docker
            containers.ids:
               - ${data.docker.container.id}
            multiline:
                pattern: '^\[#|\d{4}'
                negate:  true
                match:   after
          - type: log
            paths: # various attempts
            - /var/lib/docker/containers/${data.docker.container.id}/*.log # KO
            - /var/lib/docker/containers/631c570fcd974c248142726cf9c4a1aa21d45cc7b9a39a0eed8dc73eeb41d1df/*.log #KO
            - /var/lib/docker/containers/*/*.log #KO
            - /var/lib/docker/volumes/${data.docker.container.name}/_data/localhost_access_log*.txt #OK
            - /var/lib/docker/volumes/whoAmTest_tmpvol*/_data/*.log #OK

# I tried to add this later, according to a post I read here : https://discuss.elastic.co/t/problem-getting-autodiscover-docker-to-work-with-filebeat/144349/10
filebeat.inputs:
  - type: docker
    containers.ids:
      - "*"
processors:
 - add_docker_metadata:
     host: "unix:///var/run/docker.sock"

output.logstash:
  hosts: ["logstash:5044"]
  bulk_max_size: 4096

I know some of the paths do not follow the documentation guidelines; they are here only to give a few clues about the problem (I hope!).
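
For reference, here is the template layout as I understand it from the docs (a sketch, assuming 6.x syntax; note that condition and config are meant to be siblings within the same list item):

filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        - condition:
            or:
              - contains:
                  docker.container.name: "apache"
              - contains:
                  docker.container.name: "nginx"
          config:
            - type: docker
              containers.ids:
                - ${data.docker.container.id}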

I see no error on filebeat logs, nor on ES or logstash.
It's just not working for the logs under /var/lib/docker/containers/*/*.log!

Any idea / hint?

I have a similar issue, but I am new to setting this all up. Here's my filebeat.yml, which I bake into the image:

filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false

filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true

processors:
  - drop_fields:
      fields:
        - "docker.containers.labels"
  - add_host_metadata: ~
  - add_cloud_metadata: ~

output.logstash:
  hosts: "${BEATS_HOST_PORT:log:5044}"
logging.level: warning

The drop_fields processor came from the thread "Using AutoDiscover feature for Docker does not work when running in Swarm mode", but that didn't work for me either.

My docker-compose file has the following service:

  docker-beats:
    image: trajano.net/docker-beats
    volumes:
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    deploy:
      mode: global

Unfortunately, I just get all the logs, with no real attribution except for the Filebeat server itself.

It seems that Graylog requires the data to be under fields, so I altered the Filebeat config as follows, and logs have started to appear:

processors:
  - drop_event:
      when:
        contains:
          docker.container.image: "trajano.net/docker-beats"
  - rename:
      fields:
        - from: "docker.container.image"
          to: "fields.docker_container_image"
        - from: "docker.container.name"
          #to: "fields.docker_container_name"
          to: "fields.source"
        - from: "docker.container.id"
          to: "fields.docker_container_id"
  - rename:
      fields:
        - from: "docker.container.labels.com.docker.compose.service"
          to: "fields.docker_compose_service"
        - from: "docker.container.labels.com.docker.compose.project"
          to: "fields.docker_compose_project"
      ignore_missing: true

  - rename:
      fields:
        - from: "docker.container.labels.com.docker.swarm.node.id"
          to: "fields.docker_swarm_node"
        - from: "docker.container.labels.com.docker.swarm.task.name"
          to: "fields.docker_swarm_task"
        - from: "docker.container.labels.com.docker.swarm.service.name"
          to: "fields.docker_swarm_service"
        - from: "docker.container.labels.com.docker.stack.namespace"
          to: "fields.docker_stack_namespace"
      ignore_missing: true

However, I am still expecting the labels to trigger some change or initial parsing, but I can't seem to get that part working. The labels in question are written as:

  web:
    image: trajano/nginx-letsencrypt
    deploy:
      labels:
        - co.elastic.logs/module=nginx
        - co.elastic.logs/fileset.stdout=access
        - co.elastic.logs/fileset.stderr=error

But looking through the debug logs, it does not appear to do anything with them.
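
For context, my understanding from the docs is that hints like these should expand, per matching container, into a module configuration roughly like this (a sketch, not verified against the debug output; <container id> is a placeholder):

- module: nginx
  access:
    input:
      type: docker
      containers.ids:
        - <container id>
      containers.stream: stdout
  error:
    input:
      type: docker
      containers.ids:
        - <container id>
      containers.stream: stderr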

@RogerLapin

Sorry for the delay. Did you get it working? By default, the docker input should pick up logs from /var/lib/docker/containers.
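
That is, even without autodiscover, a minimal input like this (a sketch) should tail the JSON log files Docker writes under /var/lib/docker/containers/<id>/<id>-json.log:

filebeat.inputs:
  - type: docker
    containers.ids:
      - "*"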

@trajano This looks like another issue; I would create a separate thread for that. Also, if you drop "docker.containers.labels", you are probably dropping the attribution.

I actually got rid of

 - drop_fields:
      fields:
        - "docker.containers.labels"

in my last version. But comparing mine and the OP's, I wonder if the OP added the /var/lib/docker/containers/ volume.

I'll spawn off a separate thread.

@pierhugues

No, I am still blind here: I don't know why it's not working.
I forgot to mention the ELK version: 6.4.2.

Could it be related to Swarm mode?

@RogerLapin If you set the debug log level, do you get any errors in Filebeat's logs?

@pierhugues I don't see any errors, or any message that would indicate a problem with the autodiscover feature. It looks like it takes all the configured paths into consideration as expected, and although the feature is stated to be "beta", I see no related problem in the logs...

That said, the logs are especially verbose, and I am not sure what I should be looking for (other than obvious errors or warnings).

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.