Filebeat components ignore logging level

Hi all,
We have an ECK environment in an on-prem k8s cluster with Elasticsearch, Kibana, Logstash (HTTP input), and a standalone Elastic Agent for the container logs. All components run version 8.11.4.
That part works fine so far.

Our problem is that the agent generates a lot of debug logs, even though the agent's log level is the default (info) and the filebeat process inside the agent pod also runs at level info. Unfortunately, some modules/components/processors seem to ignore this setting.

To reduce the log volume I am already dropping some log sources, but I would like to reduce the output as a whole.
Is it possible to set the log level to info/warning for every element within the agent?
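
For reference, this is the kind of setting I would expect to control it. If I read the standalone agent docs correctly there is an agent.logging.level option, so a minimal sketch of what I would add under the existing config block looks like this (assuming ECK passes it through to the agent unchanged):

  config:
    agent:
      logging:
        level: warning      # hoped to apply to the agent and the beats it spawns
        to_files: false
        to_stderr: false

But I am not sure whether the spawned filebeat components are supposed to inherit this at all.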

Our agent manifest with config:

---
apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
metadata: 
  name: eck
spec:
  version: 8.11.4
  elasticsearchRefs: 
    - name: eck
  daemonSet:
    podTemplate:
      spec:
        serviceAccountName: elastic-agent
        automountServiceAccountToken: true
        securityContext:
          runAsUser: 0
        containers:
        - name: agent
          imagePullPolicy: Always
          image: {{ DOCKER_REPO_URL }}/bv/elastic-agent:8.11.4
          resources:
            requests:
              memory: 1Gi
              cpu: 0.2
            limits:
              memory: 1Gi
              cpu: 1
          volumeMounts:
          - mountPath: /var/lib/docker/containers
            name: varlibdockercontainers
          - mountPath: /var/log/containers
            name: varlogcontainers
          - mountPath: /var/log/pods
            name: varlogpods
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
        volumes:
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: varlogcontainers
          hostPath:
            path: /var/log/containers
        - name: varlogpods
          hostPath:
            path: /var/log/pods
  config:
    agent:
      monitoring:
        enabled: false
        use_output: default
        logs: false
        metrics: false
      logging:
        to_files: false
        to_stderr: false
        metrics:
          enabled: false
    inputs:
      - id: container-log-${kubernetes.pod.name}-${kubernetes.container.id}
        type: filestream
        processors:
          - drop_event:
              when:
                or:
                - equals:
                    log.source: "filestream-default"
                - equals:
                    log.source: "filestream-monitoring"
        use_output: default
        meta:
          package:
            name: kubernetes
            version: 1.52.0 #https://docs.elastic.co/en/integrations/kubernetes#changelog
        streams:
          - id: container-log-${kubernetes.pod.name}-${kubernetes.container.id}
            data_stream:
              dataset: agent.k8s
              type: logs
              namespace: ${kubernetes.container.name}_${kubernetes.namespace}
            prospector.scanner.symlinks: true
            parsers:
              - container:
                  format: auto
              - ndjson:
                  target: ""
                  add_error_key: false
                  message_key: message
                  overwrite_keys: true
                  ignore_decoding_error: true
            paths:
              - /var/log/containers/*${kubernetes.container.id}.log
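
To double-check what the agent actually computes from this manifest, I exec into one of the daemonset pods and dump the rendered configuration (namespace and pod name below are placeholders):

  kubectl exec -n elastic-system eck-agent-xxxxx -- elastic-agent inspect

This should at least show whether any logging.level override ends up in the policy the components receive.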

The filebeat process inside the agent pod:

root          1      0  0 08:14 ?        00:00:00 /usr/bin/tini -- /usr/local/bin/docker-entrypoint -e -c /etc/agent.yml
root          7      1  3 08:14 ?        00:07:03 elastic-agent container -e -c /etc/agent.yml
root        362      7 10 08:16 ?        00:17:40 /usr/share/elastic-agent/data/elastic-agent-dccb92/components/filebeat -E setup.ilm.enabled=false -E setup.template.enabled=false -E management.enabled=true -E management.restart_on_output_change=true -E logging.level=info -E logging.to_stderr=true -E gc_percent=${FILEBEAT_GOGC:100} -E filebeat.config.modules.enabled=false -E http.enabled=true -E http.host=unix:///usr/share/elastic-agent/state/data/tmp/7uyxrneW50ZMk_CZbrE0Hjg3idmd7AzZ.sock -E path.data=/usr/share/elastic-agent/state/data/run/filestream-default
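
Despite the -E logging.level=info flag in that command line, a large share of the container output is still at debug. This is roughly how I count the levels (namespace and daemonset name are placeholders, jq required):

  kubectl logs -n elastic-system daemonset/eck-agent --tail=10000 \
    | jq -r '."log.level"' | sort | uniq -c | sort -rn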

Some of the debug logs it produces:

{"log.level":"debug","@timestamp":"2024-01-17T11:11:17.474Z","message":"Skipping add_kubernetes_metadata processor as kubernetes metadata already exist","component":{"binary":"filebeat","dataset":"elastic_agent.filebeat","id":"filestream-default","type":"filestream"},"log":{"source":"filestream-default"},"log.logger":"kubernetes","log.origin":{"file.line":308,"file.name":"add_kubernetes_metadata/kubernetes.go"},"service.name":"filebeat","libbeat.processor":"add_kubernetes_metadata","ecs.version":"1.6.0","ecs.version":"1.6.0"}
{"log.level":"debug","@timestamp":"2024-01-17T11:11:17.475Z","message":"Skipping add_kubernetes_metadata processor as kubernetes metadata already exist","component":{"binary":"filebeat","dataset":"elastic_agent.filebeat","id":"filestream-default","type":"filestream"},"log":{"source":"filestream-default"},"service.name":"filebeat","libbeat.processor":"add_kubernetes_metadata","ecs.version":"1.6.0","log.logger":"kubernetes","log.origin":{"file.line":308,"file.name":"add_kubernetes_metadata/kubernetes.go"},"ecs.version":"1.6.0"}
{"log.level":"debug","@timestamp":"2024-01-17T11:11:17.475Z","message":"Skipping add_kubernetes_metadata processor as kubernetes metadata already exist","component":{"binary":"filebeat","dataset":"elastic_agent.filebeat","id":"filestream-default","type":"filestream"},"log":{"source":"filestream-default"},"log.logger":"kubernetes","log.origin":{"file.line":308,"file.name":"add_kubernetes_metadata/kubernetes.go"},"service.name":"filebeat","libbeat.processor":"add_kubernetes_metadata","ecs.version":"1.6.0","ecs.version":"1.6.0"}
{"log.level":"debug","@timestamp":"2024-01-17T11:11:17.475Z","message":"Skipping add_kubernetes_metadata processor as kubernetes metadata already exist","component":{"binary":"filebeat","dataset":"elastic_agent.filebeat","id":"filestream-default","type":"filestream"},"log":{"source":"filestream-default"},"service.name":"filebeat","libbeat.processor":"add_kubernetes_metadata","ecs.version":"1.6.0","log.logger":"kubernetes","log.origin":{"file.line":308,"file.name":"add_kubernetes_metadata/kubernetes.go"},"ecs.version":"1.6.0"}
{"log.level":"debug","@timestamp":"2024-01-17T11:11:17.475Z","message":"Skipping add_kubernetes_metadata processor as kubernetes metadata already exist","component":{"binary":"filebeat","dataset":"elastic_agent.filebeat","id":"filestream-default","type":"filestream"},"log":{"source":"filestream-default"},"log.logger":"kubernetes","log.origin":{"file.line":308,"file.name":"add_kubernetes_metadata/kubernetes.go"},"service.name":"filebeat","libbeat.processor":"add_kubernetes_metadata","ecs.version":"1.6.0","ecs.version":"1.6.0"}
{"log.level":"debug","@timestamp":"2024-01-17T11:11:17.475Z","message":"End of file reached: /var/log/containers/nginx-ingress-controller-qqtwt_ingress-nginx_controller-785aaef5a618293ca60e8ebe60292c96978bedc15efd3ff5d59a4e0a545d0a15.log; Backoff now.","component":{"binary":"filebeat","dataset":"elastic_agent.filebeat","id":"filestream-default","type":"filestream"},"log":{"source":"filestream-default"},"service.name":"filebeat","state-id":"native::89132419-64773","source_file":"filestream::container-log-nginx-ingress-controller-qqtwt-785aaef5a618293ca60e8ebe60292c96978bedc15efd3ff5d59a4e0a545d0a15::native::89132419-64773","path":"/var/log/containers/nginx-ingress-controller-qqtwt_ingress-nginx_controller-785aaef5a618293ca60e8ebe60292c96978bedc15efd3ff5d59a4e0a545d0a15.log","ecs.version":"1.6.0","log.logger":"input.filestream","log.origin":{"file.line":131,"file.name":"filestream/filestream.go"},"id":"container-log-nginx-ingress-controller-qqtwt-785aaef5a618293ca60e8ebe60292c96978bedc15efd3ff5d59a4e0a545d0a15","ecs.version":"1.6.0"}
{"log.level":"debug","@timestamp":"2024-01-17T11:11:17.508Z","message":"Start next scan","component":{"binary":"filebeat","dataset":"elastic_agent.filebeat","id":"filestream-default","type":"filestream"},"log":{"source":"filestream-default"},"log.logger":"file_watcher","log.origin":{"file.line":120,"file.name":"filestream/fswatch.go"},"service.name":"filebeat","ecs.version":"1.6.0","ecs.version":"1.6.0"}
{"log.level":"debug","@timestamp":"2024-01-17T11:11:17.511Z","message":"Start next scan","component":{"binary":"filebeat","dataset":"elastic_agent.filebeat","id":"filestream-default","type":"filestream"},"log":{"source":"filestream-default"},"log.logger":"file_watcher","log.origin":{"file.line":120,"file.name":"filestream/fswatch.go"},"service.name":"filebeat","ecs.version":"1.6.0","ecs.version":"1.6.0"}
{"log.level":"debug","@timestamp":"2024-01-17T11:11:17.512Z","message":"File scan complete","component":{"binary":"filebeat","dataset":"elastic_agent.filebeat","id":"filestream-default","type":"filestream"},"log":{"source":"filestream-default"},"service.name":"filebeat","written":0,"renamed":0,"removed":0,"created":0,"log.logger":"file_watcher","log.origin":{"file.line":224,"file.name":"filestream/fswatch.go"},"total":1,"truncated":0,"ecs.version":"1.6.0","ecs.version":"1.6.0"}

thanks
marc
