Problem getting Docker autodiscover to work with Filebeat

I'm trying to get the filebeat.autodiscover feature working with type: docker. With this default configuration, I don't see anything coming into Elasticsearch/Kibana (although I am getting the system, audit, and other logs).

#=========================== Filebeat autodiscover ==============================
filebeat.autodiscover:
# Autodiscover docker containers and parse logs
  providers:
    - type: docker
      containers.ids:
        - "*"

When I try to add the prospectors as recommended here: https://github.com/elastic/beats/issues/5969

filebeat.prospectors:
- type: docker
  containers.ids:
    - "*"

I get this error from Filebeat, probably because I am already using filebeat.inputs to monitor another log path:

Exiting: prospectors and inputs used in the configuration file, define only inputs not both

filebeat.inputs:

  #Tracelogger logs
  - type: log
    paths:
        - /var/log/tracelogger/*.log

Prospectors were deprecated in favour of inputs in version 6.3. Basically, input is just a new name for prospector. Change prospector to input in your configuration and the error should disappear.
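
For example, the docker prospector above becomes (same structure, only the top-level key changes):

filebeat.inputs:
- type: docker
  containers.ids:
    - "*"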

Changed the config to "inputs" (error goes away, thanks) but it's still not working with filebeat.autodiscover. I am getting metricbeat.autodiscover metrics from my containers on the same servers.

#=========================== Filebeat autodiscover ==============================
filebeat.autodiscover:
# Autodiscover docker containers and parse logs
  providers:
    - type: docker
      containers.ids:
        - "*"

#=========================== Filebeat Inputs ==============================
filebeat.inputs:

  #Tracelogger logs
  - type: log
    paths:
        - /var/log/tracelogger/*.log

  - type: docker
    containers.ids:
      - "*"

(For comparison, the equivalent autodiscover setup is working in Metricbeat on these same servers.)

Full config:

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
logging.level: debug
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  keepfiles: 7
  permissions: 0644

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
logging.selectors: ["*"]

#==========================  Modules configuration =============================
filebeat.modules:

#------------------------------- System Module -------------------------------
- module: system
  # Syslog
  syslog:
    enabled: true
    var.paths: ["/var/log/syslog*"]
  auth:
    enabled: true
    var.paths: ["/var/log/auth.log*"]

#------------------------------- Auditd Module -------------------------------
- module: auditd
  log:
    enabled: true
    var.paths: ["/var/log/audit/audit.log*"]

#=========================== Filebeat autodiscover ==============================
filebeat.autodiscover:
# Autodiscover docker containers and parse logs
  providers:
    - type: docker
      containers.ids:
        - "*"

#=========================== Filebeat Inputs ==============================

filebeat.inputs:

  #Tracelogger logs
  - type: log

    # Paths that should be crawled and fetched. Glob based paths.
    # To fetch all ".log" files from a specific level of subdirectories
    # /var/log/*/*.log can be used.
    # For each file found under this path, a harvester is started.
    # Make sure no file is defined twice as this can lead to unexpected behaviour.
    paths:
        - /var/log/tracelogger/*.log

  - type: docker
    containers.ids:
      - "*"

#========================== Elasticsearch output ===============================
output.elasticsearch:
  hosts: ["http://elk-alln-001:9200"]

#============================== Dashboards =====================================
setup.dashboards:
  enabled: true
setup.kibana:
   host: "http://elk-alln-001:5601"

#============================== Xpack Monitoring ===============================
# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
xpack.monitoring.enabled: true

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well. Any setting that is not set is
# automatically inherited from the Elasticsearch output configuration, so if you
# have the Elasticsearch output configured, you can simply uncomment the
# following line.
xpack.monitoring.elasticsearch:

You cannot use Filebeat modules and inputs at the same time in the same Filebeat instance. If you have a module in your configuration, Filebeat is going to read from the files set in the modules. Inputs are ignored in this case.

To collect logs with both modules and inputs, two instances of Filebeat need to be run. One configuration would contain the inputs and the other the modules. You can see examples of how to configure Filebeat autodiscover with modules and with inputs here: https://www.elastic.co/guide/en/beats/filebeat/current/configuration-autodiscover.html#_docker_2
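
If you are running Filebeat in Docker, one way to do this would be two containers, each with its own config file mounted over the default one (a sketch; the image tag, service names, and config file names here are just examples):

version: "3"
services:
  filebeat-modules:
    image: docker.elastic.co/beats/filebeat:6.3.2
    volumes:
      # config containing filebeat.modules (system, auditd)
      - ./filebeat-modules.yml:/usr/share/filebeat/filebeat.yml:ro
      - /var/log:/var/log:ro
  filebeat-docker:
    image: docker.elastic.co/beats/filebeat:6.3.2
    volumes:
      # config containing filebeat.autodiscover / the docker input
      - ./filebeat-docker.yml:/usr/share/filebeat/filebeat.yml:ro
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro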

Let me know if you need further help on how to configure each Filebeat.


So there is no way to configure filebeat.autodiscover with docker while also using filebeat.modules for system/auditd and filebeat.inputs in the same Filebeat instance (in our case, running Filebeat in Docker)?

Is there any technical reason for this? It would be much easier to manage one instance of Filebeat on each server. I have no idea how I could configure two Filebeats in one Docker container; maybe I need to run two containers with two different Filebeat configurations?

An aside: my config with module: system and module: auditd is working alongside filebeat.inputs - type: log. It is just the docker logs that aren't being grabbed. Are you sure there is a conflict between modules and inputs? I don't see that.

Update: I can now see some events from docker, but I'm not sure whether they are arriving via filebeat.autodiscover or the filebeat.inputs - type: docker. Also, there is no field for the container name, just the long /var/lib/docker/containers/ path:

{
  "_index": "filebeat-6.3.2-2018.08.14",
  "_type": "doc",
  "_id": "eKnrOGUBTKU9LsL1v-JY",
  "_version": 1,
  "_score": null,
  "_source": {
    "@timestamp": "2018-08-14T14:51:36.895Z",
    "prospector": {
      "type": "docker"
    },
    "input": {
      "type": "docker"
    },
    "beat": {
      "name": "cdc-rtp-001",
      "hostname": "cdc-rtp-001",
      "version": "6.3.2"
    },
    "host": {
      "name": "cdc-rtp-001"
    },
    "source": "/var/lib/docker/containers/d83e61ad52a74c1b9aebbaa55ffac794fb0e39e85270af8b3fb93ccf55199700/d83e61ad52a74c1b9aebbaa55ffac794fb0e39e85270af8b3fb93ccf55199700-json.log",
    "offset": 80936337,
    "stream": "stdout",
    "message": "2018-08-14 14:51:36,895 INFO DeviceMonitor : [check_active_devices] Currently Monitoring 0 devices"
  },
  "fields": {
    "@timestamp": [
      "2018-08-14T14:51:36.895Z"
    ]
  },
  "sort": [
    1534258296895
  ]
}

You can have both inputs and modules at the same time. I confused it with having the same file being harvested by multiple inputs. I also misunderstood your problem. :frowning:

Thanks for that. Is there any way to get the Docker metadata for the container logs, i.e. to get the container name rather than the local mapped path to the logs? I see the same document as above: the only hint of the container is the ID buried in the source path.


My config is this:

filebeat.autodiscover:
# Autodiscover docker containers and parse logs
  providers:
    - type: docker
      containers.ids:
        - "*"
        
filebeat.inputs:
  - type: docker
    containers.ids:
      - "*"

The autodiscover documentation is a bit limited; it would be better to give an example with the minimum configuration needed to grab all docker logs with the right metadata, rather than something complicated using templates and conditions: https://www.elastic.co/guide/en/beats/filebeat/current/configuration-autodiscover.html


To add more info about the container, you could add the add_docker_metadata processor to your configuration: https://www.elastic.co/guide/en/beats/filebeat/master/add-docker-metadata.html
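
For example (a sketch; host is the default socket path and can be omitted):

filebeat.inputs:
  - type: docker
    containers.ids:
      - "*"

processors:
  - add_docker_metadata:
      host: "unix:///var/run/docker.sock"

The processor matches the container ID found in the log path and adds fields such as docker.container.name and docker.container.image to each event.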

I thought (looking at the autodiscover pull request/merge: https://github.com/elastic/beats/pull/5245) that the metadata was supposed to work automagically with autodiscover. Also, it isn't clear that above and beyond putting the autodiscover config in the filebeat.yml file, you also need to use "inputs" and the metadata "processor".

I took out the filebeat.inputs - type: docker and just used this filebeat.autodiscover config, but I don't see any docker type in my filebeat-* index, only type "log". This is a direct copy of what is in the autodiscover documentation, except that I took out the template condition, as it wouldn't take wildcards and I want to get logs from all containers.

filebeat.autodiscover:
# Autodiscover docker containers and parse logs
  providers:
    - type: docker
      templates:
        - config:
            - type: docker
              containers.ids:
                - "${data.docker.container.id}"
              exclude_lines: ["^\\s+[\\-`('.|_]"]  # drop asciiart lines

OK, in the end I have it working correctly using both filebeat.autodiscover and filebeat.inputs, and I think both are needed to get the docker container logs processed properly. I wish this were documented better, but hopefully someone can find this and it helps them out. I still don't know if this is 100% correct, but I'm getting all the docker container logs now, with metadata.

Thanks @kvch for your help and responses!

filebeat.autodiscover:
# Autodiscover docker containers and parse logs
  providers:
    - type: docker
      templates:
        - config:
            - type: docker
              containers.ids:
                - "${data.docker.container.id}"
              exclude_lines: ["^\\s+[\\-`('.|_]"]  # drop asciiart lines
              
filebeat.inputs:
  - type: docker
    containers.ids:
      - "*"
    processors:
      - add_docker_metadata:
