I'm using filebeat to collect log files, and on one of my servers, filebeat's memory usage is high.
I would like to limit the memory usage of filebeat.
I have set up queue.mem by referring to the following page, but the situation is the same as before.
(queue.mem was not configured before, so I assume the defaults were in effect.)
Why does the memory usage not change?
Also, is there a better way?
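What I added is roughly the following queue.mem block in filebeat.yml (a sketch; these particular numbers are the documented defaults, and the values I actually stepped through are listed further down):

    queue.mem:
      events: 4096            # default queue size, in events
      flush.min_events: 2048  # default minimum batch size before forwarding
      flush.timeout: 1s       # default flush interval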
top
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1851 mysql 20 0 6724m 3.8g 6608 S 7.3 48.3 50:59.88 mysqld
2229 root 20 0 1075m 57m 23m S 2.7 0.7 0:04.29 filebeat
Fortunately, we are not experiencing any problems right now.
The server was rebooted early in the morning, and filebeat seems to have been temporarily overloaded; it has since settled down.
Still, over 1 GB of virtual memory and nearly 60 MB of resident memory feels like a burden on this server, so we are looking for a way to reduce it.
This may be unrelated to the problem, but the number of filebeat processes looks larger than I expected:
ps -efL | grep filebeat | wc -l
12
Filebeat is shipping four log files to Logstash.
Is the count supposed to be this high?
I have been lowering events and flush.min_events step by step while watching top, but VIRT and RES are not decreasing.
events went from 4096 down through 2048, 1024, 512, and 256 to 128,
and flush.min_events went down from 2048 through 1024, 512, 256, and 128 to 64,
yet neither value changed. (The last configuration I tried is sketched below.)
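For reference, the final step looked like this in filebeat.yml (a sketch; flush.timeout is shown at what I believe is its default):

    queue.mem:
      events: 128             # down from the default 4096
      flush.min_events: 64    # down from the default 2048
      flush.timeout: 1s       # left at the default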
Are these settings even supposed to lower memory usage?
Or should I assume that this is a reasonable baseline and it will not go any lower?
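Separately, in case the thread count is related, one thing I am considering trying (an assumption on my part that it applies here and that it would help with memory) is capping the number of CPUs filebeat can use with the general max_procs setting:

    max_procs: 1   # limit simultaneous CPU use; whether this also trims threads/memory is an assumption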