Overload of Filebeat?

Hello,

I am trying to upload 53,000,000 lines split across about 600 files with Filebeat 5.1 (the whole stack is on 5.1).

I was surprised by how slowly the lines were uploading, so I looked at the Filebeat log and, surprise, there are two errors... :frowning:

ERR Failed to create tempfile (/var/lib/filebeat/registry.new) for writing: open /var/lib/filebeat/registry.new: too many open files
ERR Writing of registry returned error: open /var/lib/filebeat/registry.new: too many open files. Continuing...

They repeat in a loop... Do you know how to resolve this problem?

You may need to increase the number of file descriptors allowed for the process if you are reaching the limit (see man ulimit and man limits.conf).
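For example, you can check the limit of your current shell with ulimit -n, and inspect the limits of the running Filebeat process directly (a quick sketch; pgrep -x assumes only one filebeat process is running):

ulimit -n
cat /proc/$(pgrep -x filebeat)/limits

Keep in mind that limits.conf only applies to PAM login sessions; if Filebeat is started by systemd, the limit has to be raised in its unit file instead.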

@andrewkroh

I have modified these settings but the errors continue:

elasticsearch.yml:
bootstrap.memory_lock: true

systemd/system/elasticsearch.service
LimitMEMLOCK=infinity

But were you talking about other parameters?

What you described are settings for Elasticsearch. You need to change settings for Filebeat. What OS are you using?

If your OS uses systemd, you could add LimitNOFILE=<new limit> to the unit file for Filebeat at /lib/systemd/system/filebeat.service. (ref: https://unix.stackexchange.com/questions/345595/how-to-set-ulimits-on-service-with-systemd/345596)
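For example, a minimal sketch (65536 is only an illustrative value; pick something comfortably above the number of files Filebeat harvests):

# in /lib/systemd/system/filebeat.service, under the [Service] section
LimitNOFILE=65536

Then reload systemd and restart Filebeat so the new limit takes effect:

sudo systemctl daemon-reload
sudo systemctl restart filebeat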

OK, I changed it and I don't get the errors anymore.

OS: RH 7.1
Stack: 5.1
