Elastic-agent ignores logging level setting

I am trying to reduce the log output of the elastic-agent containers in Kubernetes. I have added the following setting to the ConfigMap:

    agent:
      logging:
        level: error

and restarted the agents. There are still a lot of messages with `log.level: "info"` in the output of the elastic-agent containers, for example:

{"log.level":"error","@timestamp":"2023-05-04T11:13:34.745Z","message":"error getting cgroup stats for V2: error fetching stats for controller io: error fetching IO stats: error getting io.stats for path /hostfs/sys/fs/cgroup: error scanning file: /hostfs/sys/fs/cgroup/io.stat: input does not match format","component":{"binary":"metricbeat","dataset":"elastic_agent.metricbeat","id":"kubernetes/metrics-default","type":"kubernetes/metrics"},"log":{"source":"kubernetes/metrics-default"},"log.origin":{"file.line":234,"file.name":"report/report.go"},"service.name":"metricbeat","ecs.version":"1.6.0","log.logger":"metrics","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2023-05-04T11:13:34.746Z","message":"Non-zero metrics in the last 30s","component":{"binary":"metricbeat","dataset":"elastic_agent.metricbeat","id":"kubernetes/metrics-default","type":"kubernetes/metrics"},"log":{"source":"kubernetes/metrics-default"},"log.origin":{"file.line":187,"file.name":"log/log.go"},"service.name":"metricbeat","monitoring":{"ecs.version":"1.6.0","metrics":{"beat":{"cpu":{"system":{"ticks":292020,"time":{"ms":70}},"total":{"ticks":3173560,"time":{"ms":990},"value":3173560},"user":{"ticks":2881540,"time":{"ms":920}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":15},"info":{"ephemeral_id":"c4591886-b4f5-4406-9670-1bc8cba7733a","uptime":{"ms":97860259},"version":"8.7.0"},"memstats":{"gc_next":111144432,"memory_alloc":68936120,"memory_total":221474835680,"rss":254377984},"runtime":{"goroutines":302}},"libbeat":{"config":{"module":{"running":5}},"output":{"events":{"acked":936,"active":0,"batches":26,"total":936},"read":{"bytes":17203},"write":{"bytes":5267435}},"pipeline":{"clients":5,"events":{"active":0,"published":936,"total":936},"queue":{"acked":936}}},"metricbeat":{"kubernetes":{"container":{"events":612,"success":612},"node":{"events":3,"success":3},"pod":{"events":315,"success":315},"system":{"events":6,"success":6}}},"system":{"load":{"1":3.42,"15":6.51,"5":5.4,"norm":{"1":0.1069,"15":0.2034,"5":0.1688}}}}},"log.logger":"monitoring","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2023-05-04T11:13:37.934Z","message":"Reader was closed. Closing.","component":{"binary":"filebeat","dataset":"elastic_agent.filebeat","id":"filestream-default","type":"filestream"},"log":{"source":"filestream-default"},"log.logger":"input.filestream","service.name":"filebeat","id":"kubernetes-container-logs-mesh-controller-79d7d8f5bf-bvdtj-6ad3ab49005165cbea21943c4ec25f7f8be6d452984b47f631683c862c48abee","source_file":"filestream::kubernetes-container-logs-mesh-controller-79d7d8f5bf-bvdtj-6ad3ab49005165cbea21943c4ec25f7f8be6d452984b47f631683c862c48abee::native::145785786-64768","path":"/var/log/containers/mesh-controller-79d7d8f5bf-bvdtj_lnd-acm-04_mesh-controller-6ad3ab49005165cbea21943c4ec25f7f8be6d452984b47f631683c862c48abee.log","state-id":"native::145785786-64768","log.origin":{"file.line":321,"file.name":"filestream/input.go"},"ecs.version":"1.6.0","ecs.version":"1.6.0"}

It seems like the agents ignore the new logging level setting. However, they correctly fail to start when I use an invalid logging level value (e.g. "foobar"), so the setting is at least being read.

Did anyone manage to force the agents to the "warning" or "error" log level?

Fixed and works in 8.7.1.

