We are seeing that the auditbeat service is using a lot of memory on our VMs. We suspect that when the Elasticsearch cluster is down, auditbeat buffers the data and its memory usage grows. Could you please let us know if there is some setting in auditbeat.yml so that it will not buffer data in memory?
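For reference, the buffering in question is the Beats internal memory queue, which is configurable in auditbeat.yml. A minimal sketch that shrinks the queue so fewer events are held in memory (the values below are illustrative, not recommendations):

```yaml
# Shrink the internal memory queue so fewer events are buffered
# before being shipped to Elasticsearch (illustrative values).
queue.mem:
  events: 2048           # maximum events held in memory
  flush.min_events: 512  # publish once this many events are queued
  flush.timeout: 1s      # ...or after this long, whichever comes first
```

Note this bounds the queue rather than disabling buffering entirely; events that cannot be shipped are still held up to this limit.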
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
20849 root 20 0 27.4g 24.6g 38784 S 71.4 44.1 683:48.32 auditbeat
Kindly look into this issue. We are seeing it on multiple servers even when the Elasticsearch cluster is up.
One option we tried was disabling the socket and process modules; CPU and memory usage decreased considerably, and we are still monitoring. But we would like to know if there is anything more we can do to resolve this issue.
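A sketch of the auditbeat.yml change described above, assuming the default Auditbeat system module layout; the remaining dataset names are illustrative and should match whatever the original configuration enabled:

```yaml
# System module with the socket and process datasets removed
# (the change described above; remaining datasets are illustrative).
- module: system
  datasets:
    - host
    - login
    - package
    - user
  period: 10s
```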
I second the request. This also applies to the Beats pre-packaged with Ingest Manager. As an idea of a cap, 512 MB to 1 GB of memory and no more than 50% CPU should be all that they use. I had Metricbeat take down a few of my servers: it consumed 6 GB+ of RAM on the host and starved out the application.
If you are still seeing conditions where Auditbeat consumes lots of memory, you can take a heap profile while it's running; the profile can be used to identify what's consuming the space. To prepare, run Auditbeat with the additional CLI flag --httpprof localhost:8080.
Then, when you want to capture a heap profile, you can use curl to save it.
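A sketch of capturing and inspecting the profile, assuming the standard Go pprof endpoints that --httpprof exposes (the port matches the flag above; inspecting the file requires a Go toolchain):

```shell
# Save a heap profile from the running Auditbeat
# (pprof endpoint exposed by --httpprof localhost:8080).
curl -s -o heap.pprof http://localhost:8080/debug/pprof/heap

# Inspect it; -top lists the functions holding the most memory.
go tool pprof -top heap.pprof
```

The resulting heap.pprof file can also be attached to a support thread so the allocation hot spots can be reviewed.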