I'm using the Elastic Stack for monitoring and logging our Kubernetes cluster (with Elasticsearch, Filebeat, Metricbeat, and Kibana).
For some reason Metricbeat collects an enormous amount of data for just a few pods (after one night it had filled 3 GB of disk space).
Right now I'm playing with the Metricbeat YAML configuration, enabling and disabling individual modules and checking the resulting footprint, but it's a trial-and-error approach.
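For reference, this is roughly the shape of the modules section I'm experimenting with (the module list, metricsets, and periods here are placeholders I picked while testing, not a setup I'm recommending):

```yaml
# metricbeat.yml -- sketch of the modules section I'm tweaking
metricbeat.modules:
  # Node-level OS metrics; a longer period means fewer documents per hour
  - module: system
    metricsets: ["cpu", "memory", "network", "filesystem"]
    period: 60s
    enabled: true

  # Kubernetes metrics scraped from the kubelet on each node
  - module: kubernetes
    metricsets: ["node", "pod", "container"]
    period: 60s
    hosts: ["https://${NODE_NAME}:10250"]
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
    ssl.verification_mode: "none"
    enabled: true
```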
Has anyone configured a Metricbeat service like this before and can recommend an efficient configuration, or suggest how to inspect the storage footprint of each metrics module?
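The only idea I've had so far for measuring per-module volume is a terms aggregation over the Metricbeat indices, something like the query below in Kibana Dev Tools (the field names are assumptions based on the default Metricbeat mapping; on older versions the module field may be `metricset.module` instead of `event.module`). Document counts per module/metricset would at least show where the volume is coming from, but I don't know if that's the right approach:

```
GET metricbeat-*/_search
{
  "size": 0,
  "aggs": {
    "per_module": {
      "terms": { "field": "event.module", "size": 20 },
      "aggs": {
        "per_metricset": {
          "terms": { "field": "metricset.name", "size": 20 }
        }
      }
    }
  }
}
```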
Thank you for any tips and guidance.