Measure number of events per pod in k8s

Hi,

I wonder if there is a proper way to measure the number of emitted events per pod/container in Kubernetes, every minute, before they are sent to Logstash.

I see that Filebeat saves the offset of each harvested file in the registry file. Is it possible to translate this offset into a number of consumed lines?

If not, is there any other efficient way to send the number of emitted events for each pod in a Kubernetes cluster as a metric to some backend?
Running a separate tool for each pod just to count log lines would probably be overkill. Since Filebeat already tracks each log file, it would be much easier to make use of that somehow.
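For reference, here is a rough sketch of what I have in mind. It assumes the pre-7.0 single-file JSON registry format (newer Filebeat versions lay the registry out differently) and the usual `/var/log/containers/<pod>_<namespace>_<container>-<id>.log` naming on the node. Since the registry stores byte offsets rather than line counts, this would only be a log-volume proxy, not an exact event count:

```python
import json
import time
from pathlib import Path

# Assumption: pre-7.0 Filebeat registry layout (a single JSON array of entries
# with "source" and "offset"); the path depends on how Filebeat is deployed.
REGISTRY_PATH = Path("/usr/share/filebeat/data/registry")
INTERVAL_SECONDS = 60


def read_offsets():
    """Return {log file path: byte offset} from the Filebeat registry."""
    entries = json.loads(REGISTRY_PATH.read_text())
    return {e["source"]: e["offset"] for e in entries}


def pod_from_path(path):
    """Extract the pod name from the common Kubernetes log path convention,
    /var/log/containers/<pod>_<namespace>_<container>-<id>.log."""
    return Path(path).name.split("_", 1)[0]


previous = read_offsets()
while True:
    time.sleep(INTERVAL_SECONDS)
    current = read_offsets()
    per_pod = {}
    for source, offset in current.items():
        # Offsets are in bytes, so this measures harvested log volume,
        # not the exact number of lines/events.
        delta = offset - previous.get(source, 0)
        if delta > 0:
            pod = pod_from_path(source)
            per_pod[pod] = per_pod.get(pod, 0) + delta
    previous = current
    print(per_pod)  # ship this to a metrics backend instead of printing
```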

WDYT?
Thanks,

You can do the same thing with an aggregation in Elasticsearch and turn it into a graph in Kibana.
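Something along these lines, as a minimal sketch: it assumes the events are enriched with `kubernetes.pod.name` (e.g. via the `add_kubernetes_metadata` processor) and that the field is mapped as a keyword, the indices match `filebeat-*`, and Elasticsearch is reachable without auth on localhost:9200:

```python
import requests

# Count events per pod over the last minute using a terms aggregation.
query = {
    "size": 0,
    "query": {"range": {"@timestamp": {"gte": "now-1m"}}},
    "aggs": {
        "per_pod": {
            "terms": {"field": "kubernetes.pod.name", "size": 100}
        }
    },
}

resp = requests.get(
    "http://localhost:9200/filebeat-*/_search",
    json=query,
    timeout=10,
)
resp.raise_for_status()

for bucket in resp.json()["aggregations"]["per_pod"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])
```

In Kibana the same thing is a date histogram on `@timestamp` with the series split by a terms aggregation on `kubernetes.pod.name`.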

Why do you want to do it on the Filebeat side?

Hi

I already use Kibana for this, but that only works after the events have been processed.
I need to know the rate per pod regardless of what comes after Filebeat, because we want to stop or throttle some "spammer" pods before it's too late.
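One partial Filebeat-side signal would be its local monitoring endpoint (enabled with `http.enabled: true`), although it only reports totals per Filebeat instance, not per pod. A minimal polling sketch, assuming the default localhost:5066 endpoint:

```python
import time
import requests

# Assumption: Filebeat runs with http.enabled: true, exposing internal
# counters at /stats on localhost:5066. These are totals for the whole
# Filebeat instance, not broken down per pod.
STATS_URL = "http://localhost:5066/stats"
INTERVAL_SECONDS = 60


def published_events():
    stats = requests.get(STATS_URL, timeout=5).json()
    # Exact counter names can vary between Filebeat versions.
    return stats["libbeat"]["pipeline"]["events"]["published"]


last = published_events()
while True:
    time.sleep(INTERVAL_SECONDS)
    current = published_events()
    print(f"events/minute (whole Filebeat instance): {current - last}")
    last = current
```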

help?
