Filebeat deployment in Kubernetes/Docker

Hi, as you probably know this is quite a new feature :slight_smile:. We are continually adding to it to improve the user experience, so I want to thank you for your feedback; it helps us shape what comes next.

In the case of NGINX, the container sends access logs to stdout and error logs to stderr. I recently implemented a way to filter on the stream, so you can route the correct output to each fileset of a module. This is how it will look in Filebeat:

filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        - condition:
            contains:
              docker.container.name: "nginx"
          config:
            - module: nginx
              access:
                prospector:
                  type: docker
                  containers.stream: stdout
                  containers.ids:
                    - "${data.docker.container.id}"
              error:
                prospector:
                  type: docker
                  containers.stream: stderr
                  containers.ids:
                    - "${data.docker.container.id}"

I've opened a new issue to allow defining a default fallback, in case none of the templates match.

As a note, you don't need to define the paths parameter; it's already defined for you.

Thanks,

  1. It was not working otherwise... I had to put the path of the exact container (see my config).

  2. Only the 'access' fileset is working. If I put both (like your config), all messages end up as errors in Elasticsearch (because there is actually only one log file in the container for both streams).

  3. Kubernetes metadata fields are ignored in the condition part (when launched by Kubernetes).

  4. What about my own apps' logs (not nginx, apache, etc.)? How do I define those?

  1. I see now why it wasn't working for you: there is a typo in your config, container.ids should be containers.ids.

  2. It depends on how nginx is configured; it must output the error log to stderr. Also, if you see parsing errors, it's because the stream filter is not yet released.

  3. You are using the Docker autodiscover provider; we plan to release a Kubernetes autodiscover provider soon, which will give you access to Kubernetes metadata (see the second sketch after this list).

  4. You can add more templates for them; the templates setting is a list, so you can define as many conditions as needed (see the first sketch after this list).
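
For illustration, here is a minimal sketch of a second template for a custom app; the container name "myapp" and the choice to collect its output as plain docker logs (rather than through a module) are assumptions for the example, not part of this thread:

filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        # ... the nginx template from the config above goes here ...
        # hypothetical second template for a custom app named "myapp"
        - condition:
            contains:
              docker.container.name: "myapp"
          config:
            - type: docker
              containers.ids:
                - "${data.docker.container.id}"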
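
As for Kubernetes metadata: since the Kubernetes provider is not released yet, the following is only a hypothetical sketch of what it might look like, assuming it mirrors the Docker provider. The kubernetes provider type, the "app: nginx" label, and the ${data.kubernetes.container.id} variable are all assumptions here:

filebeat.autodiscover:
  providers:
    - type: kubernetes
      templates:
        # condition on Kubernetes metadata instead of Docker metadata
        - condition:
            equals:
              kubernetes.labels.app: "nginx"
          config:
            - module: nginx
              access:
                prospector:
                  type: docker
                  containers.stream: stdout
                  containers.ids:
                    - "${data.kubernetes.container.id}"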

Hi,
I am also trying to do something similar to what Asher is doing. I currently have a prospector that adds Docker metadata to my logs and ships them to Logstash. It looks like this:

filebeat:
  prospectors:
    - type: log
      paths:
        - '/var/lib/docker/containers/*/*.log'
      json.message_key: log
      json.keys_under_root: true
      processors:
        - add_docker_metadata: ~

What I want to understand is how autodiscover works. Should I replace the prospector with the autodiscover settings, or does autodiscover apply to what my prospectors are generating?

When you use filebeat.prospectors you define a static configuration; it stays the same while Filebeat is running.

Autodiscover allows you to define conditions and launch different configurations live, based on autodiscover events from the provider (Docker).

If that configuration works for you, you don't need to use autodiscover. If, on the contrary, you want to apply different patterns depending on the container, you may want to define autodiscover rules for that.
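
For comparison, here is a minimal sketch of an autodiscover equivalent of your static prospector. The catch-all condition (matching every container image) is an assumption for the example, and the docker prospector type replaces the json.* options because it already decodes Docker's JSON log format; the add_docker_metadata processor is kept from your static config:

filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        # assumed catch-all: match every container image
        - condition:
            regexp:
              docker.container.image: ".*"
          config:
            - type: docker
              containers.ids:
                - "${data.docker.container.id}"
              processors:
                - add_docker_metadata: ~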

Best regards

Oh okay. Got it. I will switch to Autodiscover and see how it goes. Thanks Carlos!

Hi,
I meant that log lines were transferred to Elasticsearch but not parsed.
If I leave only the 'access' fileset, then it's OK (but only those lines come through).
In the nginx container they symlink the error log to stderr and the access log to stdout. In terms of Docker, you see only one file, <container-id>-json.log, under /var/lib/docker/containers/<container-id>/.
Asher
