Kubernetes autodiscover not shipping logs to logstash

Hi, I'm having issues shipping logs from Filebeat to Logstash.
Filebeat version: 6.3.2

filebeat.yml: |-
    logging.level: ${LOG_LEVEL}
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          in_cluster: true
          templates:
            - condition:
                or:
                  - equals:
                      kubernetes.namespace: development
                  - equals:
                      kubernetes.namespace: sqa
                  - equals:
                      kubernetes.namespace: test
                  - equals:
                      kubernetes.namespace: stage
              config:
                - type: docker
                  containers.ids:
                    - "${data.kubernetes.container.id}"
                  multiline.pattern: '^[[:space:]]'
                  multiline.negate: false
                  multiline.match: after

    output.logstash:
      hosts: ['${LOGSTASH_HOSTS}']

kubernetes.yaml is based on filebeat-autodiscover-kubernetes.yaml.

The pipeline worked fine when I specified a docker provider type, so I know data can reach Logstash.

I could also use some help setting the correct debug flags in the Kubernetes manifest!

Thanks.
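For reference, one way to get debug output is through the Filebeat container args in the DaemonSet spec. A sketch based on the stock filebeat-kubernetes.yaml manifest (your manifest may differ; the `-d` selectors shown are illustrative):

```yaml
# Excerpt from the filebeat DaemonSet container spec (sketch; adapt to your manifest)
containers:
- name: filebeat
  image: docker.elastic.co/beats/filebeat:6.3.2
  args: [
    "-c", "/etc/filebeat.yml",
    "-e",                              # log to stderr instead of files
    "-d", "autodiscover,kubernetes"    # enable debug selectors for autodiscover
  ]
```

Alternatively, setting `logging.level: debug` (e.g. via the `LOG_LEVEL` environment variable used above) enables debug logging globally.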

I suspect my mistake was not having a processors entry at the same level as filebeat.autodiscover.

filebeat.yml: |-
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          templates:
            - condition:
                or:
                  - equals:
                      kubernetes.namespace: kube-system
                  - equals:
                      kubernetes.namespace: default
              config:
                - type: docker
                  containers.ids:
                    - "${data.kubernetes.container.id}"
                  exclude_lines: ["^\\s+[\\-`('.|_]"]  # drop asciiart lines
                  multiline.pattern: '^[[:space:]]'
                  multiline.negate: false
                  multiline.match: after
    processors:
      - add_kubernetes_metadata:
          in_cluster: true
    output.logstash:
      hosts: ['${LOGSTASH_HOSTS}']

Adding add_kubernetes_metadata will not solve your problem. It adds Kubernetes metadata to your events; if the input cannot be read, there are no events to add metadata to.

You could try changing equals to contains in the list of conditions. That is the part you added recently, correct?
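For example, the condition block could be rewritten like this (a sketch; contains matches a substring rather than requiring an exact value):

```yaml
templates:
  - condition:
      or:
        - contains:
            kubernetes.namespace: kube-system
        - contains:
            kubernetes.namespace: default
```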

Thanks for your feedback. I added the processors section and events are now being logged as desired, so my initial thought was right: the processors entry was needed here. I don't know whether it's always required, but it solved my issue.
