Running multiple filebeat instances

We have a system that is very chatty in terms of the number of logs it produces. Our current production setup runs a single filebeat container (in k8s), and it is struggling to keep up. We have adjusted resources, adding CPU and memory, but we still end up in a crash loop from time to time.

Is it possible to have 2 filebeat pods looking at the same globbed path and using a common registry to spread the load and provide some redundancy?

I don't think that is possible; each filebeat instance needs to have its own registry.
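If you do run more than one instance, each one can be pointed at its own registry directory. A minimal sketch (the path here is illustrative, not a recommendation):

```yaml
# filebeat.yml for a second instance — the registry path must be
# unique per instance, otherwise the two will corrupt each other's state.
filebeat.registry.path: /var/lib/filebeat-2/registry
```

In k8s this usually means giving each pod its own volume for the registry rather than sharing one.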

How have you identified that filebeat is the bottleneck and not Elasticsearch?

Have you already experimented with different values for the number of workers and the bulk size [documentation]?
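Those two settings live on the output side of the config. A sketch of what tuning them might look like in `filebeat.yml` (the host and the specific values are placeholders; start from the defaults and adjust based on measurements):

```yaml
# filebeat.yml — Elasticsearch output tuning (values are illustrative)
output.elasticsearch:
  hosts: ["https://elasticsearch:9200"]
  worker: 4            # concurrent workers publishing to each host
  bulk_max_size: 2048  # max events per bulk indexing request
```

Raising `worker` and `bulk_max_size` increases throughput only if Elasticsearch can absorb the extra indexing load, so it is worth confirming where the bottleneck actually is first.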

I kept looking for the worker config on the reader side; it never occurred to me that it was on the writer side. Thank you for the insight, this helps.
