I've seen a number of threads on this but none that seem to give any diagnosis or recommendations.
We are running Filebeat (various 6.x versions) in Kubernetes, and it seems that whatever memory limit we set, the pods eventually get OOM killed and start again. This isn't a significant problem from a log-ingestion point of view - some log data arrives a minute or three late - but it is a problem for our Ops people, who don't like seeing "pod killed because out of memory" alerts all the time.
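For context, the resource section on the Filebeat container in our DaemonSet looks roughly like the sketch below; the numbers are illustrative rather than our exact values, and the behaviour is the same whatever we put in `limits.memory`.

```yaml
# Illustrative resource block from the Filebeat container spec (values are examples).
# Whatever limit we set, usage creeps up until the kubelet OOM-kills the container.
resources:
  requests:
    cpu: 100m
    memory: 200Mi
  limits:
    memory: 300Mi
```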
Is there any way of
- finding out what is actually going on,
- finding out how much memory Filebeat should be using, so we can size the allocation in K8s correctly, and
- configuring Filebeat so that it does not use an ever-increasing amount of memory? (A sketch of the kind of settings I have in mind is below.)
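The knobs I can see that look memory-related are the internal queue size and the per-input harvester settings, plus the stats HTTP endpoint for watching memstats. A sketch of what I mean - these values are guesses for illustration, not a tested configuration:

```yaml
# Illustrative filebeat.yml fragment - assumptions, not a known-good config.
# Newer 6.x releases use filebeat.inputs; older ones use filebeat.prospectors
# with the same per-input options.
filebeat.inputs:
  - type: log
    paths:
      - /var/log/containers/*.log
    harvester_limit: 50    # cap the number of files read concurrently
    close_inactive: 2m     # release file handles (and their buffers) sooner
    clean_removed: true    # drop registry state for deleted files

# The internal memory queue is the other obvious consumer; shrinking it
# presumably trades throughput for a smaller footprint.
queue.mem:
  events: 2048
  flush.min_events: 512

# Stats endpoint (if available in the version we run) so we can watch
# beat.memstats over time:  curl http://localhost:5066/stats
http.enabled: true
http.port: 5066
```

Does bounding the queue and harvesters like this actually cap resident memory, or is there some other component (registry, output backpressure) that will keep growing regardless?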