Inotify leak

hello team,

we found auditbeat consuming a lot of inotify instances recently, and we are running the latest auditbeat version

We found that the auditbeat pods have created many inotify instances (> 2000), but most of them are unused, i.e. they have no inotify watches. It looks like there is an inotify instance leak in auditbeat.
root@k8s-node:~# ps -ef | grep 123543
root 123543 118052 33 Mar07 ? 3-05:34:55 /usr/local/bin/auditbeat -e -v -c /etc/auditbeat/auditbeat.yml -httpprof localhost:20063
root@k8s-node:~# ls -l /proc/123543/fd/996
lr-x------ 1 root root 64 Mar 7 16:00 /proc/123543/fd/996 -> anon_inode:inotify
root@k8s-node:~# cat /proc/123543/fdinfo/996
pos: 0
flags: 02000000
mnt_id: 15

root@k8s-node:~# ls -l /proc/123543/fd/* | grep inotify | wc -l

root@k8s-node:~# cat /proc/123543/fdinfo/* | grep inotify | wc -l

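To make the per-fd check above repeatable, here is a small Python sketch (a hypothetical helper, not part of auditbeat) that counts a process's inotify instances and how many of them hold zero watches, using the same `/proc/<pid>/fd` and `/proc/<pid>/fdinfo` interfaces shown above. An idle instance looks like fd 996 above: the fdinfo has only `pos`/`flags`/`mnt_id` lines and no `inotify wd:` entries.

```python
import os


def count_watches(fdinfo_text):
    """Count active watches in an inotify fdinfo dump.

    The kernel emits one line starting with 'inotify wd:' per watch;
    an instance with zero such lines is holding an fd but watching nothing.
    """
    return sum(1 for line in fdinfo_text.splitlines()
               if line.startswith("inotify wd:"))


def inotify_summary(pid):
    """Return (instances, idle_instances, total_watches) for a PID."""
    fd_dir = "/proc/%d/fd" % pid
    instances = idle = watches = 0
    for fd in os.listdir(fd_dir):
        try:
            target = os.readlink(os.path.join(fd_dir, fd))
        except OSError:
            continue  # fd closed while we were iterating
        if target != "anon_inode:inotify":
            continue
        instances += 1
        try:
            with open("/proc/%d/fdinfo/%s" % (pid, fd)) as f:
                n = count_watches(f.read())
        except OSError:
            continue
        watches += n
        if n == 0:
            idle += 1
    return instances, idle, watches
```

Running `inotify_summary(123543)` as root on the affected node would report how many of the > 2000 instances are idle, which is a more direct measure of the leak than the raw `wc -l` counts.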
Hi @masonlu2014,

Have you encountered this issue in prior versions of auditbeat? Does what you're seeing look similar to this issue?


Hi @masonlu2014, thanks for reporting this.

Could you provide some additional information?

  • What version of auditbeat is this ("latest" sometimes means different things to different people and can change from day to day)?
  • Could you tell us what if any additional auditbeat modules you're using?
  • Have you configured the backend value in your config? If so, to what?
  • What kernel version are you running on these hosts?

hello Nick,

we are using kernel 5.15.0-26-generic
no, I don't think we configured any backend (I think the default uses inotify)

we are only using the FIM and auditd modules, nothing more
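For reference, a minimal auditbeat.yml module section matching that setup might look like the following sketch (the paths are illustrative, not the actual config from this report; note there is no explicit backend key, so the default file-change watcher, inotify-based on Linux, is used):

```yaml
auditbeat.modules:
  - module: auditd
    # audit rules omitted; whatever defaults ship with the package

  - module: file_integrity
    paths:
      - /etc
      - /usr/bin
    # no 'backend' key set here, so the default (inotify on Linux) applies
```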

sorry, I think we are using auditbeat v7. However, I compared the auditbeat v7 and v8 versions, and the FIM module code that uses inotify to monitor file changes is almost the same in both.

thanks for checking, the issue does look similar, but that one is almost 4 years old?