Hey, folks! Please help me solve this problem.
Logstash 6.1
I have this in the logs:
......
[2017-12-25T21:23:21,010][WARN ][io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
java.io.IOException: Too many open files
at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) ~[?:1.8.0_151]
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422) ~[?:1.8.0_151]
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250) ~[?:1.8.0_151]
at io.netty.channel.socket.nio.NioServerSocketChannel.doReadMessages(NioServerSocketChannel.java:135) ~[logstash-input-tcp-5.0.2.jar:?]
at io.netty.channel.nio.AbstractNioMessageChannel$NioMessageUnsafe.read(AbstractNioMessageChannel.java:75) [logstash-input-tcp-5.0.2.jar:?]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:571) [logstash-input-tcp-5.0.2.jar:?]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:512) [logstash-input-tcp-5.0.2.jar:?]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:426) [logstash-input-tcp-5.0.2.jar:?]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:398) [logstash-input-tcp-5.0.2.jar:?]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:877) [logstash-input-tcp-5.0.2.jar:?]
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144) [logstash-input-tcp-5.0.2.jar:?]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]
[2017-12-25T21:23:22,013][WARN ][io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
java.io.IOException: Too many open files
at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) ~[?:1.8.0_151]
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422) ~[?:1.8.0_151]
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250) ~[?:1.8.0_151]
at io.netty.channel.socket.nio.NioServerSocketChannel.doReadMessages(NioServerSocketChannel.java:135) ~[logstash-input-tcp-5.0.2.jar:?]
at io.netty.channel.nio.AbstractNioMessageChannel$NioMessageUnsafe.read(AbstractNioMessageChannel.java:75) [logstash-input-tcp-5.0.2.jar:?]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:571) [logstash-input-tcp-5.0.2.jar:?]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:512) [logstash-input-tcp-5.0.2.jar:?]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:426) [logstash-input-tcp-5.0.2.jar:?]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:398) [logstash-input-tcp-5.0.2.jar:?]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:877) [logstash-input-tcp-5.0.2.jar:?]
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144) [logstash-input-tcp-5.0.2.jar:?]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]
......
After this, Logstash dies. It happens every 1-2 hours.
ulimit -n is 500000, and MAX_OPEN_FILES for the service is also 500000.
The Logstash config (logstash.yml) is:
path.data: /var/lib/logstash
pipeline.workers: 4
pipeline.output.workers: 4
path.config: /etc/logstash/conf.d/*.conf
config.reload.automatic: true
config.reload.interval: 3600
path.logs: /var/log/logstash
There are 2 TCP inputs with a json codec, 8 conditionals in the filter section (each one just mutates the type field if a particular key field is present in the JSON), and 8 conditionals in the output section (each checks the type and sends the event to Elasticsearch via the elasticsearch output plugin with its own index; sniffing is disabled(!)).
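To give an idea of the structure, the pipeline looks roughly like this (ports, field names and index names below are made-up placeholders, not my real values):

input {
  tcp { port => 5000 codec => json }
  tcp { port => 5001 codec => json }
}
filter {
  # 8 conditionals like this one, each keyed on a different field in the JSON
  if [some_key_field] {
    mutate { replace => { "type" => "some_type" } }
  }
}
output {
  # 8 conditionals like this one, one per type, each with its own index
  if [type] == "some_type" {
    elasticsearch {
      hosts => ["es-node:9200"]
      index => "some_type-%{+YYYY.MM.dd}"
      sniffing => false
    }
  }
}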
I was forced to create a crontab task that restarts Logstash every hour! It works for now, but it's not a solution.