We are using Elastic Stack 6.2.4 with 12 Logstash nodes in production. Each node had been working perfectly, but at some point every node started consuming too many file descriptors. The maximum file descriptor limit on each node is 65000. Restarting a node resolves the problem temporarily, but after a while the node starts consuming descriptors again, and it takes only about 15 minutes for a node to reach its limit.
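For reference, this is roughly how we are watching the descriptor counts (the `pgrep` pattern is a guess at a typical install; adjust it to match how your Logstash process is named):

```shell
#!/bin/sh
# Count open file descriptors for each Logstash process by listing
# the entries under /proc/<pid>/fd (Linux only).
for pid in $(pgrep -f logstash); do
  count=$(ls /proc/"$pid"/fd 2>/dev/null | wc -l)
  echo "PID $pid: $count open file descriptors"
done

# To see what kind of descriptors are piling up (e.g. TCP sockets from
# the beats input vs. regular files), lsof can break them down by type:
#   lsof -p <pid> | awk '{print $5}' | sort | uniq -c | sort -rn
```

In our case the counts climb steadily from restart until the 65000 limit is hit about 15 minutes later.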
We are using the beats, tcp, and http input plugins. We tried disabling them one by one to see which input was causing the issue, and found that when we disable the beats input everything works normally. We are not seeing any spike in the ingestion rate: before this issue the Logstash nodes easily handled 15k docs/s, and the current ingestion rate is only 10-11k docs/s.
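Our input section looks roughly like the following (the ports shown here are placeholders, not our exact settings):

```
input {
  beats {
    port => 5044    # placeholder port
  }
  tcp {
    port => 5000    # placeholder port
  }
  http {
    port => 8080    # placeholder port
  }
}
```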
What could be the possible causes of this, and how can we prevent it?
Thanks in advance