Logstash Too many open files

Hi, I'm running Logstash 2.0.0-1 on Ubuntu 12.04.3 LTS.

Logstash runs for about 24 hours and then dies with 'Too many open files'.

paste -> http://pastebin.com/YsyKPUTL

ES is running on this same host, with two hosts sending logs to LS via log-courier. It's a very light test setup: two local hosts sending Apache and auth.log logs. I'm wondering why LS is opening so many files.

ES ulimit is the default in /etc/init.d/elasticsearch, MAX_OPEN_FILES=65535.
LS ulimit is also the default in /etc/init.d/logstash, LS_OPEN_FILES=16384. Is this the recommended amount?
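Note that init scripts don't always apply the limit they declare; you can confirm the limit the running process actually got via /proc. A minimal sketch, using the current shell's PID (`$$`) for illustration — substitute the Logstash JVM's PID (e.g. from `pgrep -f logstash`) on a real system:

```shell
# Show the effective open-file limit of a process via /proc.
# $$ (the current shell) is used here for illustration; on a real
# system substitute the Logstash JVM's PID, e.g. $(pgrep -f logstash).
grep 'Max open files' "/proc/$$/limits"
```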

ES, LS config -> http://pastebin.com/PVykPiNJ

Have you checked with lsof which files Logstash has opened? 16k file descriptors should be more than enough for your setup.
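If the full lsof listing is overwhelming, counting descriptors over time is enough to show whether there's a leak. A sketch using /proc instead of lsof (again the current shell's PID is only illustrative — use the Logstash JVM's PID):

```shell
# Count open file descriptors for a PID by listing /proc/<pid>/fd.
# $$ (the current shell) is used for illustration; substitute the
# Logstash JVM's PID on a real system. A count that grows steadily
# across repeated runs indicates a descriptor leak.
ls "/proc/$$/fd" | wc -l
```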

Pages and pages of these (and growing):

```
java  26126  logstash  261u  IPv4  197665  0t0  TCP  ip-10-31-0-105.ec2.internal:52812->ip-10-31-0-105.ec2.internal:9200 (ESTABLISHED)
java  26126  logstash  262u  IPv4  197678  0t0  TCP  ip-10-31-0-105.ec2.internal:52821->ip-10-31-0-105.ec2.internal:9200 (ESTABLISHED)
java  26126  logstash  263u  IPv4  197681  0t0  TCP  ip-10-31-0-105.ec2.internal:52822->ip-10-31-0-105.ec2.internal:9200 (ESTABLISHED)
java  26126  logstash  264u  IPv4  196008  0t0  TCP  ip-10-31-0-105.ec2.internal:52831->ip-10-31-0-105.ec2.internal:9200 (ESTABLISHED)
```

Hope this helps someone else,

The Logstash elasticsearch output has sniffing enabled in many example configs
( https://www.elastic.co/guide/en/beats/libbeat/current/getting-started.html#logstash-setup ).

Be sure to check if sniffing is enabled ... **this causes LS to open a new IPv4 socket every 5s by default** ... eventually exhausting the open-file limit in the OS (Ubuntu 12 in my case).

On a default install of ES and LS, be sure to check the following parameters:

`sniffing`
`sniffing_delay`
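For reference, a minimal sketch of an elasticsearch output with sniffing explicitly disabled (the `hosts` value is illustrative — use your own):

```
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # Disable node discovery so LS reuses its existing connections
    # instead of opening new sockets every sniffing_delay seconds
    # (5s by default).
    sniffing => false
  }
}
```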

Also, in the course of debugging this, I discovered Kibana sends config requests to ES every few seconds too, but it doesn't seem to keep opening new connections like the ES output sniffing problem I mention above.

```
04:14:22.150092 IP 127.0.0.1.40938 > 127.0.0.1.9200: tcp 182
     E....5@.@.......
     ......#..7}5.0.f
     ................
     ....POST./.kiban
     a/config/_search
     .HTTP/1.1..Host:
     .localhost:9200.
     .Content-Length:
     .75..Connection:
     .keep-alive....{
     "size":1000,"sor
     t":[{"buildNum":
     {"order":"desc",
     "ignore_unmapped
     ":true}}]}

04:14:22.151193 IP 127.0.0.1.9200 > 127.0.0.1.40938: tcp 357
       E....b@.@.c.....
       ....#....0.f.7}.
       ................
       ....HTTP/1.1.200
       .OK..Content-Typ
       e:.application/j
       son;.charset=UTF
       -8..Content-Leng
       th:.270....{"too
       k":1,"timed_out"
       :false,"_shards"
       :{"total":1,"suc
       cessful":1,"fail
       ed":0},"hits":{"
       total":1,"max_sc
       ore":null,"hits"
       :[{"_index":".ki
       bana","_type":"c
       onfig","_id":"4.
       3.0","_score":nu
       ll,"_source":{"b
       uildNum":9369,"d
       efaultIndex":"[f
       ilebeat-]YYYY.MM
       .DD"},"sort":["9
       369"]}]}}
```

You've hit a bug that was reported about a week ago.

Thanks.
I'm using the latest LS:

```
root@Test:~# dpkg -l | grep logs
ii  logstash                           1:2.1.0-1                           An extensible logging pipeline

root@Test:~# cat /etc/issue
Ubuntu 12.04.3 LTS \n \l

root@Test:~# uname -a
Linux Test 3.2.0-57-virtual #87-Ubuntu SMP Tue Nov 12 21:53:49 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
```

Someone should consider changing the documentation ( https://www.elastic.co/guide/en/beats/libbeat/current/getting-started.html#logstash-setup ), or at least mentioning this in the LS setup docs until it's fixed. It causes even a small test setup to crash.

The fix for this is being tracked in https://github.com/elastic/elasticsearch-ruby/issues/241

@Chris_Clifton we'll try to understand how quickly this can be solved, and whether it warrants a warning in the meantime. From what we see, it should be easy/quick to fix :slight_smile:

Is this fixed?
In which version of Logstash is this fixed?
