Logstash file descriptors increasing rapidly

Hi,

We are using Elastic Stack 6.2.4 with 12 Logstash nodes in prod. Each node was working perfectly, but at some point every node started consuming too many file descriptors. The max file descriptor limit on each node is 65000. Restarting a node resolves the problem temporarily, but after some time the node starts consuming file descriptors again, and it takes only about 15 minutes for it to reach the limit.
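For anyone trying to reproduce the observation, this is a minimal sketch (not part of our setup, just an illustration assuming a Linux node with the psutil Python package installed) of how we track how fast the Logstash process eats file descriptors; the process match and the polling interval are placeholders to adjust:

```python
# Sketch: watch the fd count of the Logstash process over time (Linux only).
# Assumes psutil is installed and the JVM command line contains "logstash".
import time
import psutil

def find_logstash():
    for proc in psutil.process_iter(['pid', 'cmdline']):
        cmdline = ' '.join(proc.info['cmdline'] or [])
        if 'logstash' in cmdline:
            return proc
    raise RuntimeError('no logstash process found')

def watch_fds(interval=60):
    proc = find_logstash()
    previous = proc.num_fds()
    while True:
        time.sleep(interval)
        current = proc.num_fds()
        print(f"open fds: {current} (delta {current - previous:+d} in {interval}s)")
        previous = current

if __name__ == '__main__':
    watch_fds()
```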

We are using the beats, tcp, and http input plugins. We tried disabling them one by one to see which input is causing the issue, and found that when we disable the beats input everything works normally. We are not seeing any spike in ingestion rate: before this issue the Logstash nodes were easily handling 15k docs/s, and the current rate is 10-11k docs/s.
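Since the beats input is the suspect, here is a small sketch (again just an assumption-laden illustration, not our production tooling) that breaks down the Logstash process's TCP connections by state on the beats port, to check whether half-closed connections such as CLOSE_WAIT are piling up; 5044 is the assumed default beats port, so adjust it if yours differs:

```python
# Sketch: count TCP connections by state on the beats listener port.
# Assumes psutil is installed; pass the Logstash PID as the first argument.
import sys
from collections import Counter
import psutil

BEATS_PORT = 5044  # assumed default beats input port; change to match your config

def connection_breakdown(pid):
    proc = psutil.Process(pid)
    states = Counter()
    for conn in proc.connections(kind='inet'):
        if conn.laddr and conn.laddr.port == BEATS_PORT:
            states[conn.status] += 1
    return states

if __name__ == '__main__':
    pid = int(sys.argv[1])
    for state, count in connection_breakdown(pid).most_common():
        print(f"{state:>12}: {count}")
```

If the count for a state like CLOSE_WAIT keeps growing while the ingestion rate stays flat, that would point at beats connections not being released rather than at genuine load.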

What could be the possible causes of this? How can we prevent it?

Thanks in advance

I tried increasing the fd limit from 65000 to 200000 and also updated Logstash's beats input plugin. Still facing the same issue.

In the graph, 11:45 is when we restarted the Logstash node, and we had to restart it again at 12:30.
