Filebeat with services inside of docker compose via autodiscover

Alright, so let's say I have these services:

services:
  redis:
    ...
  filebeat:
    ...

And this filebeat configuration:

filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        - config:
          - type: docker
            containers.ids:
              - "${data.docker.container.id}"
        - condition:
            contains:
              docker.container.image: redis
          config:
            - module: redis
              log:
                input:
                  type: docker
                  containers.ids:
                    - "${data.docker.container.id}"
              slowlog:
                var.hosts: ["${data.host}:${data.port}"]

It's my understanding that I should see information related to redis coming from filebeat? That it'll automatically figure out the logging situation across the docker containers?

Hi @krainboltgreene and welcome :slight_smile:

You are right: with this configuration filebeat will automatically enable the redis module to collect the logs from your redis containers (as long as their container image contains "redis" :slight_smile:).

Take into account that for this to work filebeat needs access to the docker log files; take a look at the documentation about running filebeat on docker.
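
In practice that means the filebeat container usually runs as root and mounts both the docker socket and the host path where docker writes the container logs. Roughly, the relevant part of the filebeat service would look like this (a minimal sketch, assuming the default docker paths on the host):

  filebeat:
    user: root
    volumes:
      # Docker socket, so the autodiscover provider can watch container start/stop events
      - "/var/run/docker.sock:/var/run/docker.sock"
      # Container log files written by the json-file logging driver
      - "/var/lib/docker/containers/:/var/lib/docker/containers/:ro"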


Thanks so much for the response! I'll see what I can do to figure out why I'm not getting any information.

Oh :thinking: does filebeat have access to the docker socket and to the docker log files? Can you see any errors in the filebeat logs?

No errors, but here's my full configuration file if that helps:

Can you also share your docker compose file?

@jsoriano I've updated the gist with that file.

I have reduced the config to this and it works for redis:

docker-compose.yml:

version: "3.3"
  
services:
  redis:
    image: redis:4.0.11-alpine
    healthcheck:
      test: redis-cli ping
      interval: 30s
      timeout: 10s
      retries: 3

  filebeat:
    image: docker.elastic.co/beats/filebeat:6.4.2
    command:
      - "-e"
      - "--strict.perms=false"
    restart: always
    user: root
    volumes:
      - "./filebeat.yml:/usr/share/filebeat/filebeat.yml:ro"
      - "/var/run/docker.sock:/var/run/docker.sock"
      # This is needed for filebeat to load container log path as specified in filebeat.yml
      - "/var/lib/docker/containers/:/var/lib/docker/containers/:ro"
      # This is needed for filebeat to load logs for system and auth modules
      - "/var/log/:/var/log/:ro"

filebeat.yml:

filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        - condition:
            contains:
              docker.container.image: redis
          config:
            - module: redis
              log:
                enabled: true
                input:
                  type: docker
                  containers.ids:
                    - "${data.docker.container.id}"
              slowlog:
                enabled: true
                var.hosts: ["${data.host}:${data.port}"]

output.console:
  pretty: true

Notice that I have added the -e flag to filebeat; this makes it log to standard output, which is recommended for deployments in docker, and in this case it will also help you see possible errors.

I have also seen that you have configured elasticsearch:9200 as the output, and that the elasticsearch service is started with --transport.host=127.0.0.1. Is it possible that filebeat cannot connect to elasticsearch? You should be able to see this once you add the -e flag mentioned before. If that is the case, try removing this flag, or set it to --transport.host=0.0.0.0.
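
For reference, once connectivity is sorted out, the output section of filebeat.yml would typically look like this (a minimal sketch, assuming the elasticsearch service in your compose file is reachable under the hostname elasticsearch; swap it in for output.console once you are done debugging):

output.elasticsearch:
  hosts: ["elasticsearch:9200"]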

If this doesn't help and my example doesn't work for you (it should print the events to stdout), check that your docker daemon is configured with the json-file logging driver. You can check it with docker info.
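
If it turns out another driver is configured, a minimal sketch of /etc/docker/daemon.json that sets the default back to json-file (assuming you manage the daemon configuration through that file; the daemon needs a restart afterwards):

{
  "log-driver": "json-file"
}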

-e is fantastic. There's a whole swath of things for me to work on apparently. Also, I'm now getting redis data!
