Filebeat writes log files to /tmp/filebeat.{host}.root.log.INFO.{date}

This looks like the same case as this closed topic.

Filebeat is deployed on a Kubernetes cluster as a DaemonSet and generates thousands of files under /tmp.

This caused extremely high inode usage, which eventually prevented the node from running new pods.

I tried disabling Filebeat's logging (logging.level: error, logging.to_files: false in the config below) to no avail, which suggests these files are not written by Filebeat's own logger.
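
The naming pattern matches the glog/klog convention (program.host.user.log.SEVERITY.date) used by the Kubernetes Go client library, so Filebeat's own logging.* settings would not touch these files if that is where they come from. klog falls back to os.TempDir() for its log directory, so as a stopgap I'm considering pointing TMPDIR at a pod-scoped emptyDir. This is only a sketch (the volume and mount names are made up):

    # Hypothetical DaemonSet excerpt: klog writes to os.TempDir() by default,
    # and Go's os.TempDir() honors TMPDIR on Linux, so this moves the files
    # into a volume that is deleted together with the pod instead of piling
    # up in the node's /tmp.
    spec:
      template:
        spec:
          containers:
          - name: filebeat
            env:
            - name: TMPDIR
              value: /klog-tmp
            volumeMounts:
            - name: klog-tmp
              mountPath: /klog-tmp
          volumes:
          - name: klog-tmp
            emptyDir: {}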

filebeat.yml
    queue.mem:
      events: 4096
      flush.min_events: 512
      flush.timeout: 5s
    filebeat.inputs:
    - type: container
      paths: 
        - '/var/lib/docker/containers/*/*.log'
      json.keys_under_root: true
      json.ignore_decoding_error: true
      ignore_older: 48h
      close_inactive: 24h
    processors:
    - add_kubernetes_metadata:
        host: ${HOSTNAME}
    - decode_json_fields:
        when.regexp.message: "^{.*}$"
        fields: ["message"]
        process_array: true
        max_depth: 3
    - dissect:
        when:
          and:
            - has_fields: ['kubernetes.namespace']
            - equals.kubernetes.namespace: "ingress"
        tokenizer: "%{remote_addr} - [%{remote_user}] - - [%{time_local}] \"%{request}\" %{status} %{body_bytes_sent} \"%{http_referer}\" \"%{http_user_agent}\" %{request_length} %{request_time} [%{proxy_upstream_name}] %{upstream_addr} %{upstream_response_length} %{upstream_response_time} %{upstream_status} %{req_id}"
        field: "message"
        target_prefix: "data"
    - convert:
        when.not.has_fields: ['timestamp']
        fields:
          - {from: "@timestamp",                  to: "timestamp"}
    - convert:
        fields:
          - {from: "kubernetes.container.name",   to: "kubernetes.container_name"}
          - {from: "kubernetes.container.image",  to: "kubernetes.container_image"}
          - {from: "kubernetes.pod.name",         to: "kubernetes.pod_name"}
          - {from: "kubernetes.node.name",        to: "kubernetes.host"}
          - {from: "kubernetes.namespace",        to: "kubernetes.namespace_name"}
        fail_on_error: false
    - drop_fields:
        fields: 
          - host
          - agent
          - input
          - ecs
          - log
          - stream
          - json
          - kubernetes.pod
          - kubernetes.container
          - kubernetes.replicaset
          - kubernetes.namespace
          - kubernetes.node
        ignore_missing: true
    logging.json: true
    logging.level: error
    logging.to_files: false
    setup.ilm.enabled: false
    setup.template.enabled: false
    setup.template.overwrite: false
    setup.template.pattern: ""
    setup.template.name: ""
    output:
      elasticsearch:
        hosts: ["${ELASTICSEARCH_URL}"]
        username: '${ELASTICSEARCH_USERNAME}'
        password: '${ELASTICSEARCH_PASSWORD}'
        indices:
          - index: "${INDEX_PREFIX}.%{[kubernetes.namespace_name]}.%{+yyyy.MM.dd}"

This seems to be exactly the same issue as the previous post: if I comment out the add_kubernetes_metadata processor, the log files are not created.
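
For the record, here is a minimal config that should be enough to reproduce the file creation, assuming nothing else in my setup is involved (an untested reduction; the console output is just to keep it self-contained):

    # Hypothetical minimal repro: the container input plus the
    # add_kubernetes_metadata processor, which per the test above is what
    # triggers the /tmp/filebeat.*.log.INFO.* files.
    filebeat.inputs:
    - type: container
      paths:
        - '/var/lib/docker/containers/*/*.log'
    processors:
    - add_kubernetes_metadata:
        host: ${HOSTNAME}
    output.console:
      pretty: false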

This also appears to be related to this issue.
