What is the proper configuration for metrics in the Elastic Stack to reduce the memory footprint?

I'm using the Elastic Stack for monitoring and logging our Kubernetes cluster (with Elasticsearch, Filebeat, Metricbeat, and Kibana).

For some reason Metricbeat collects an enormous amount of data for just a few pods (after one night it filled 3 GB of disk space).

Right now I'm playing with the Metricbeat YAML configuration, enabling/disabling certain modules and inspecting the footprint, but it's a trial-and-error approach.
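For reference, this is the kind of trimming I'm experimenting with. The module names, metricsets, and values below are just illustrative examples, not a known-good configuration:

```yaml
# Illustrative Metricbeat module config (values are examples, not a recommendation).
# Fewer metricsets and a longer collection period mean fewer documents indexed.
metricbeat.modules:
  - module: kubernetes
    metricsets: ["pod", "node"]     # keep only the metricsets actually needed
    period: 60s                     # default is 10s; a longer period cuts data volume
    hosts: ["https://${NODE_NAME}:10250"]
  - module: system
    enabled: false                  # disable modules that aren't needed at all
```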

Has anyone configured the Metricbeat service before and can recommend the most efficient configuration, or does anyone know how to inspect the memory footprint of each metrics module?
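For what it's worth, one way I've been trying to see where the data comes from (a sketch, assuming the default `metricbeat-*` index names) is to check index sizes with the `_cat` API and then aggregate on the module field. Note the field may be `metricset.module` or `event.module` depending on the Metricbeat version:

```
GET _cat/indices/metricbeat-*?v&h=index,docs.count,store.size&s=store.size:desc

GET metricbeat-*/_search
{
  "size": 0,
  "aggs": {
    "by_module": { "terms": { "field": "metricset.module" } }
  }
}
```

The terms aggregation shows a document count per module, which at least points at the noisiest one.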

Thank you for any tips and guidance.

Hi @sokolmateusz, from the information you provide it is difficult to understand what may be happening here.

Why do you say it is "unprecedented"? Could you share the configuration you are using and what you are trying to monitor in detail?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.