Filebeat: docker autodiscover - do I have things right?

Hi there,

After reading and watching this information:


https://www.elastic.co/guide/en/beats/filebeat/6.3/configuration-autodiscover.html
https://www.elastic.co/webinars/elasticsearch-log-collection-with-kubernetes-docker-and-containers

I'm still not 100% certain that I have the right filebeat.yml for my use case. I have 7 "static" docker containers and a few that start and stop. I'm trying to monitor all the logs (and performance metrics) from these containers automatically using filebeat "autodiscover". One of my containers is the official Elasticsearch docker container, so if I can enhance anything by using the "elasticsearch" module, that would be cool.

CONTAINER ID        IMAGE                                    COMMAND                  CREATED             STATUS              PORTS                                            NAMES
223b4e9e9203        docker.elastic.co/beats/filebeat:6.3.0   "/usr/local/bin/dock…"   About an hour ago   Up 6 minutes                                                         filebeat
ac124ea897ec        traceloggercisco/tacassist:latest        "/bin/sh -c 'cron &&…"   12 days ago         Up 25 hours         80/tcp, 85/tcp, 0.0.0.0:85->443/tcp              tacassist
02f08fe30ed9        traceloggercisco/maintenance:latest      "crond -f"               12 days ago         Up 25 hours                                                          maintenance
3ec9b5d75b56        traceloggercisco/idm:latest              "python3 ./main.py"      12 days ago         Up 25 hours         0.0.0.0:8506->80/tcp                             idm
2f9c2b74197c        traceloggercisco/logstorage:latest       "python3 ./LogStorag…"   12 days ago         Up 25 hours                                                          logstorage
bd4b65647675        traceloggercisco/webapp:latest           "/entrypoint.sh /usr…"   12 days ago         Up 25 hours         80/tcp, 0.0.0.0:8443->443/tcp                    webapp
1db10faa880b        elasticsearch:tracelogger                "/bin/bash bin/es-do…"   3 weeks ago         Up 25 hours         0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp   elasticsearch

my filebeat.yml is:

filebeat.prospectors:
  - type: docker
    containers.ids:
      - '*'

#=========================== Filebeat autodiscover ==============================
filebeat.autodiscover:
# Autodiscover docker containers and parse logs
  providers:
    - type: docker
      templates:
        - condition.contains:
            docker.container.image: elasticsearch
          config:
            - module: elasticsearch
              log:
                input:
                  type: docker
                  containers.ids:
                    - "${data.docker.container.id}"

processors:
  - add_docker_metadata:

Does this look alright? Do I have everything I need? I am seeing logs coming in, but not from all the containers. Am I right in assuming that filebeat should find all the containers and retrieve their logs, even if filebeat started after the other containers?

There is no Elasticsearch Filebeat module in 6.3; it will only be available in 6.4. Might this be the issue?

Where did you stumble over the Elasticsearch Filebeat module? Mainly curious to figure out if we have it in our communication somewhere where it shouldn't be for 6.3.


The filebeat 6.3 documentation (https://www.elastic.co/guide/en/beats/filebeat/current/configuration-autodiscover.html) says autodiscover supports modules (it gives redis as the example, though). I assumed that autodiscover supported all existing modules. It might be good to clarify that in the docs.

So what you are saying is that filebeat 6.3 won't do any special parsing of the Elasticsearch logs in my docker container. However, will it still extract those container logs like it does for any generic container, via the "docker logs" API?

What would the current config be then to get all the logs from existing and new docker containers?

filebeat.prospectors:
  - type: docker
    containers.ids:
      - '*'

#=========================== Filebeat autodiscover ==============================
filebeat.autodiscover:
# Autodiscover docker containers and parse logs
  providers:
    - type: docker

processors:
  - add_docker_metadata:

Also, do I need both the filebeat.prospectors and the filebeat.autodiscover config sections? I didn't see this clearly in the documentation and examples, but saw it in the webinar (which is excellent, by the way!).
This webinar used the nginx module in the autodiscover example: https://www.elastic.co/webinars/elasticsearch-log-collection-with-kubernetes-docker-and-containers

Autodiscover supports all modules that were released with this version: https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-modules.html The Elasticsearch module has not been released yet.

It is correct that you can still get all the Elasticsearch logs, but you don't get the parsing, as you stated above.

You should only need the autodiscover part. It will pick up the logs from all running containers as well as new ones. The add_docker_metadata processor should not be needed either, since autodiscover adds the docker metadata automatically.
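For what it's worth, the docker input doesn't actually call the `docker logs` API: it tails the JSON log files that Docker's default json-file logging driver writes on the host, which is why that directory needs to be reachable from inside the filebeat container. You can see the files it would pick up with something like:

```shell
# Each container's stdout/stderr ends up in a *-json.log file under its
# container-ID directory (assuming the default json-file logging driver).
ls /var/lib/docker/containers/*/*-json.log
```

Containers whose images use a different logging driver won't produce these files, which is another reason logs from some containers might be missing.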

So really all I need (for now, as of filebeat 6.3) for docker logs is:

#=========================== Filebeat autodiscover ==============================
filebeat.autodiscover:
# Autodiscover docker containers and parse logs
  providers:
    - type: docker

I'll open another thread for Metricbeat, as I'm trying to do the same config there. Thank you so much for your help!

Yes, that should do it. The part I'm not 100% sure about anymore is whether you also need the

containers.ids:
- '*'

But I think '*' is the default when it's not configured, so it should just work.
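For reference, my understanding (worth verifying against the docs for your exact version) is that inside an autodiscover template you would write it out explicitly like this, using the container ID from the autodiscover event:

```yaml
filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        - config:
            - type: docker
              # Explicit form; ${data.docker.container.id} is filled in
              # from the autodiscover event for the matched container.
              containers.ids:
                - "${data.docker.container.id}"
```

The bare provider config above relies on the defaults doing the equivalent of this for every container.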
