High memory usage of Metricbeat in Kubernetes

I am trying to collect metrics and logs from pods in our staging Kubernetes cluster. I have followed the instructions here: Run Metricbeat on Kubernetes | Metricbeat Reference [8.1] | Elastic

The cluster has two nodes and 269 running pods. There is a Metricbeat pod on each of the two nodes, and each eats up about 250 MB of memory, which seems a bit too much to me. When running Metricbeat on a server without containerization, the memory footprint is barely 50 MB.

Also, the default Kubernetes manifests provided by Elastic declare resources.limits.memory: 200Mi, which caused the pods to be OOMKilled until I raised that limit myself, so that suggests to me something is off.
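For reference, this is the part of the DaemonSet container spec I had to change (sketch; the 400Mi value is just what I picked to stop the OOMKills, not a recommendation):

```yaml
# Excerpt from the metricbeat DaemonSet container spec.
# 400Mi is an example value I chose, not an official recommendation.
resources:
  limits:
    memory: 400Mi
  requests:
    cpu: 100m
    memory: 100Mi
```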

Is such high memory usage to be expected? If so, are there any ways to optimize it? (E.g. collecting metrics only from certain pods, selected by labels, annotations, etc.)
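To illustrate the kind of filtering I mean, something like this autodiscover template is what I have in mind (a sketch only; the `monitoring: enabled` label is a hypothetical convention, not something in the Elastic manifests):

```yaml
# Sketch: only collect pod metrics from pods carrying a
# hypothetical "monitoring: enabled" label.
metricbeat.autodiscover:
  providers:
    - type: kubernetes
      node: ${NODE_NAME}
      templates:
        - condition:
            equals:
              kubernetes.labels.monitoring: "enabled"
          config:
            - module: kubernetes
              metricsets: ["pod"]
              hosts: ["https://${NODE_NAME}:10250"]
              period: 30s
```

I am unsure whether restricting collection this way would meaningfully reduce the memory footprint, or whether the overhead comes from somewhere else (e.g. watching the Kubernetes API).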

Using version 7.16.3. Thanks!

Hi, how did you solve this? Thanks in advance.

I did not; still waiting for suggestions.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.