Filebeat modules (nginx, logstash, elasticsearch, kibana) don't work with autodiscover

Hi guys,

I am having trouble configuring autodiscover with Filebeat modules. The modules don't seem to work: the logs are not being parsed at all. Autodiscover itself, on the other hand, works without any problems. Do you have any idea what I am doing wrong? You can see my Filebeat configuration below.

filebeat.autodiscover:
    providers:
      - type: kubernetes
        templates:
          - condition:
              and:
                - equals:
                    kubernetes.namespace: dev
                - contains:
                    kubernetes.labels.app: redis
            config:
              - module: redis
                log:
                  enabled: true
                  input:
                    type: docker
                    fields:
                      category: app-redis-dev
                    fields_under_root: true
                    containers.ids:
                      - "${data.kubernetes.container.id}"
                slowlog:
                  enabled: false
          - condition:
              and:
                - equals:
                    kubernetes.namespace: logging
                - contains:
                    kubernetes.labels.app: logstash
            config:
              - module: logstash
                log:
                  enabled: true
                  input:
                    type: docker
                    fields:
                      category: app-logstash-dev
                    fields_under_root: true
                    containers.ids:
                      - "${data.kubernetes.container.id}"
                slowlog:
                  enabled: false              
          - condition:
              and:
                - equals:
                    kubernetes.namespace: logging
                - contains:
                    kubernetes.labels.app: kibana
            config:
              - module: kibana
                log:
                  enabled: true
                  input:
                    type: docker
                    fields:
                      category: app-kibana-dev
                    fields_under_root: true
                    containers.ids:
                      - "${data.kubernetes.container.id}"
                slowlog:
                  enabled: false
          - condition:
              and:
                - equals:
                    kubernetes.namespace: logging
                - contains:
                    kubernetes.labels.app: elasticsearch
            config:
              - module: elasticsearch
                server:
                  enabled: true
                  input:
                    type: docker
                    fields:
                      category: app-elasticsearch-dev
                    fields_under_root: true
                    containers.ids:
                      - "${data.kubernetes.container.id}"
                slowlog:
                  enabled: false
          - condition:
              and:
                - equals:
                    kubernetes.namespace: dev
                - contains:
                    kubernetes.labels.app: myapp
            config:
              - module: nginx
                error:
                  enabled: true
                  input:
                    type: docker
                    containers.stream: stderr
                    fields:
                      category: myapp-dev
                    fields_under_root: true
                    containers.ids:
                      - "${data.kubernetes.container.id}" 
                access:
                  enabled: true
                  input:
                    type: docker
                    containers.stream: stdout
                    fields:
                      category: myapp-dev
                    fields_under_root: true
                    containers.ids:
                      - "${data.kubernetes.container.id}"

Thank you very much!


How can you tell logs are not parsed?

Are unparsed logs still shipped?

Any errors in the filebeat logs?

The logs are shipped to Logstash -> Elasticsearch and I can see them in Kibana, but they are not parsed by the modules (for example, the nginx logs don't have the exported fields nginx.access.remote_ip_list, nginx.access.remote_ip, and so on).

I deploy Filebeat with this Helm chart (https://github.com/helm/charts/tree/master/stable/filebeat, chart version 1.1.2). The installed Filebeat version is 6.5.4.

I tried to enable the modules in filebeat.yml:

  filebeat.modules:
  - module: nginx
  - module: logstash
  - module: kibana
  - module: elasticsearch

but in the modules.d directory I see that all the yml files still end with ".disabled" (e.g. nginx.yml.disabled). I also manually renamed nginx.yml.disabled to nginx.yml, but the nginx module still doesn't work.
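
Instead of renaming the files by hand, the yml files in modules.d can also be toggled with the filebeat modules CLI; a minimal sketch, run from inside the Filebeat container:

  # enable the modules, then verify which ones are active
  filebeat modules enable nginx logstash kibana elasticsearch
  filebeat modules list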

In the filebeat log I even see that the modules are enabled.

2019-02-21T18:07:39.364Z INFO beater/filebeat.go:101 Enabled modules/filesets: nginx (access, error), logstash (log, slowlog), kibana (log), elasticsearch (audit, deprecation, gc, server, slowlog), ()

There are no errors in the filebeat logs, everything looks fine.

Hi @steffens , do you have any working configuration of filebeat using kubernetes, autodiscover and any filebeat module?

Filebeat requires Ingest Node (Elasticsearch) in order to process the events. By sending the events via Logstash instead, you disable the parsing.

For simple setups I'd recommend having Filebeat send directly to Elasticsearch.
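
A minimal sketch of that direct output, assuming an in-cluster Elasticsearch service reachable as elasticsearch:9200 (the host is a placeholder to adjust):

  output.elasticsearch:
    # placeholder: replace with your actual Elasticsearch endpoint
    hosts: ["http://elasticsearch:9200"]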

If you still want to send via Logstash, you need to prepare Ingest Node anyway via filebeat setup ... (which requires the ES output to be configured).
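
A minimal sketch of such a one-off setup run (the Elasticsearch URL is a placeholder, and the exact flags may vary by Filebeat version; -E temporarily overrides the outputs so setup can reach Elasticsearch directly):

  filebeat setup --pipelines --modules nginx,logstash,kibana,elasticsearch \
    -E output.logstash.enabled=false \
    -E 'output.elasticsearch.hosts=["http://elasticsearch:9200"]'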

The modules send the ingest node pipeline name via @metadata.pipeline. You can use this field in the pipeline setting in the Elasticsearch output in Logstash.
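
For reference, a minimal sketch of that Logstash output (the host is a placeholder; the key part is the pipeline option reading the field from @metadata):

  output {
    elasticsearch {
      hosts => ["http://elasticsearch:9200"]    # placeholder: your ES endpoint
      pipeline => "%{[@metadata][pipeline]}"    # run the ingest pipeline requested by the module
    }
  }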

Thank you very much @steffens, your response helped me a lot! :slight_smile:
