Auditbeat [7.11.2 and 7.12.0] memory issue

Hi, I'm seeing a memory issue when I use the add_process_metadata processor.
Memory usage keeps growing until the process is OOM-killed.


When I remove this processor from my config, memory usage looks like this:

I have a heap dump, but I can't upload it here.

Hi @mareckii, welcome to discuss :slight_smile:

Could you share the configuration you are using?

It'd be nice to have the profile. If we can confirm this is a bug, you could create an issue in GitHub and upload the profile there.
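In case it helps, this is roughly how a heap profile can be captured from a running Beat. It's only a sketch, assuming the global --httpprof flag (or, on recent 7.x versions, the http.pprof.enabled setting) is available in your build, so please double-check against the command reference for your version:

```yaml
# Sketch: expose the Go pprof endpoints from Auditbeat, then grab a heap profile.
#
# Option A (command-line flag, assumed available):
#   auditbeat -e --httpprof localhost:6060
#
# Option B (auditbeat.yml, assuming your version supports pprof on the HTTP endpoint):
http:
  enabled: true
  host: localhost
  port: 5066
  pprof.enabled: true
#
# Then fetch the profile with the Go toolchain (adjust the port to whichever option you used):
#   go tool pprof http://localhost:6060/debug/pprof/heap
```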

Hi,
I'm using the configuration from your example here:
https://raw.githubusercontent.com/elastic/beats/7.12/deploy/kubernetes/auditbeat-kubernetes.yaml
The only difference is:

output.logstash:
  hosts: ["127.0.0.1:5044"]

Currently I have removed:

- add_process_metadata:
    match_pids: ['process.pid']
    include_fields: ['container.id']
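For completeness, this is roughly how the modified part of auditbeat.yml in the manifest's ConfigMap looked before I removed the processor; a sketch only, everything else is unchanged from the linked manifest:

```yaml
# Sketch of the relevant bits of auditbeat.yml from the ConfigMap.
processors:
  - add_process_metadata:            # the processor I have now removed
      match_pids: ['process.pid']
      include_fields: ['container.id']
  - add_kubernetes_metadata:         # kept, with the options from the manifest
      host: ${NODE_NAME}
      # (indexer/matcher options as in the linked manifest)

output.logstash:                     # replaces the default output.elasticsearch
  hosts: ["127.0.0.1:5044"]
```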

Just let me know where I should upload the profile.

@mareckii I have been doing some quick tests with a simple Auditbeat configuration, and I have seen that the add_process_metadata processor has a cache of process information whose entries are never released. However, it only holds one entry per process ID, so it is effectively bounded by the maximum number of PIDs in the system.

I have tried creating many processes in a loop to fill this cache, and the memory usage of add_process_metadata grows to about 13MB but doesn't seem to go beyond that, so it does appear to be effectively bounded.
[Screenshot from 2021-03-31 18-43-46]

Even though it would be nice to remove entries for processes that no longer exist, 13MB doesn't seem particularly problematic.
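As a rough sanity check: the default Linux pid_max is 32768, so even at a few hundred bytes per cached entry the cache should top out in the low tens of megabytes, which is consistent with the ~13MB plateau above.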

Could you check in your heap dump if the memory is being consumed by add_process_metadata or by add_kubernetes_metadata?

I haven't tried with add_kubernetes_metadata, but with the configuration you are using, add_kubernetes_metadata won't enrich events that don't have container.id, so removing add_process_metadata may also reduce the memory usage of the other processor.
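To illustrate the dependency: in that manifest add_kubernetes_metadata only matches events through the container.id field, which is exactly the field add_process_metadata fills in. Roughly like this, paraphrasing the linked manifest (exact option layout may differ slightly between versions):

```yaml
processors:
  - add_process_metadata:
      match_pids: ['process.pid']
      include_fields: ['container.id']       # resolves the event's pid to a container id
  - add_kubernetes_metadata:
      host: ${NODE_NAME}
      default_indexers.enabled: false
      default_matchers.enabled: false
      indexers:
        - container:                          # index pod metadata by container id
      matchers:
        - fields:
            lookup_fields: ['container.id']   # events without container.id are not enriched
```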

Could you try to remove add_kubernetes_metadata while keeping add_process_metadata and check if memory usage improves?

Looking at add_kubernetes_metadata, it also has a cache, one that relies on receiving delete events from the Kubernetes API to remove its entries. The problem could be there: there are cases where delete events are not received, and those entries would then stay in memory forever.

Update: pods are also removed from the cache if an update marks them for termination. In any case, it'd be good to check whether the memory issue is there.

@mareckii it'd be great if you could confirm in your scenario if the problem is with add_process_metadata or with add_kubernetes_metadata.

This is how it looks in my dump

There is nothing related to add_kubernetes_metadata in my dump.

I disabled add_kubernetes_metadata and enabled add_process_metadata;
memory usage looks like this:

When I disable add_process_metadata and enable add_kubernetes_metadata,

usage looks like this:

I've tried increasing the memory limit from 200M to 1G.
It takes longer (several days), but eventually I still get an OOM.
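For reference, the limit I changed is the container resources block in the DaemonSet from the manifest; roughly this, assuming the original limit was expressed as 200Mi:

```yaml
# Sketch of the DaemonSet container resources change described above.
resources:
  limits:
    memory: 1Gi       # raised from the original ~200Mi; the OOM still happens, just later
```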

Thanks @mareckii for continuing with the investigation. I have created an issue in GitHub; could you please try to attach your heap dumps to a comment there? Memory leak with add_process_metadata and k8s manifest for Auditbeat · Issue #24890 · elastic/beats · GitHub

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.