I have several Filebeat instances running in a Kubernetes environment that are being repeatedly OOMKilled. The Filebeat pods are configured with memory requests of 250Mi and limits of 500Mi, and I run Filebeat with --memprofile enabled. What I'm curious about is why the memory profiles show only about 20-50MB of inuse_space; I expected the numbers to be somewhere near the 500MB limit. I don't know what to make of the 450MB+ discrepancy.
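For reference, the resources section on the Filebeat container looks roughly like the sketch below (the surrounding manifest layout is assumed; only the request and limit values come from my setup):

```yaml
# Sketch of the Filebeat container resources
# (assumed layout; only the memory values reflect my actual configuration)
resources:
  requests:
    memory: "250Mi"
  limits:
    memory: "500Mi"  # the container is OOMKilled when its memory usage exceeds this limit
```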
I've attached three examples.