Logstash stops receiving new inputs after 5 minutes

Hi.

I'm having trouble with some inputs from Filebeat.

Originally, I was receiving inputs from three different sources on three different ports (5044, 5045, and 5046), one source per port.

Today, we added 6 more servers to port 5045. Everything worked fine at first, but after 5 minutes there were no more inputs from the Filebeat agents that use ports 5045 and 5046.
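
For reference, the input section of the pipeline configuration looks roughly like this (the ports are the real ones; SSL and the other options are omitted here):

input {
  beats {
    port => 5044
  }
  beats {
    port => 5045
  }
  beats {
    port => 5046
  }
}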

At first I saw a lot of entries like [Could not retrieve remote IP address for beats input] in the log file. I found two suggested solutions for this: one was to upgrade to version 6.2, and the other was to comment out some lines in the file "message_listener.rb". I went with the second option. That log entry disappeared, and there is no other error or warning in the log, but the same behavior keeps happening: I start the Logstash process, and after 5 minutes the inputs on ports 5045 and 5046 stop.
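
For anyone debugging something similar: the Logstash monitoring API can confirm whether the beats inputs are still emitting events. Assuming the default API port 9600, the event counters in the node stats stop increasing once the inputs stall:

curl -s 'http://localhost:9600/_node/stats?pretty'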

Maybe it's some Logstash tuning issue, I don't know.
I hope someone can give me a push to fix this.

Thank you.

Hi.

I checked Kibana a couple of hours later and found that everything had stopped. So I checked the logs, and it looks like Logstash lost the connection to Elasticsearch, and then a few minutes later the connection came back. But this error began to appear in the log:

[2018-09-14T17:58:21,088][WARN ][io.netty.channel.AbstractChannelHandlerContext] An exception 'java.lang.NullPointerException' [enable DEBUG level for full stacktrace] was thrown by
a user handler's exceptionCaught() method while handling the following exception:
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[?:1.8.0_152]
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) ~[?:1.8.0_152]
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) ~[?:1.8.0_152]
at sun.nio.ch.IOUtil.read(IOUtil.java:192) ~[?:1.8.0_152]
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380) ~[?:1.8.0_152]
at io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:288) ~[netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1100) ~[netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:349) ~[netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:112) ~[netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:571) ~[netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:512) ~[netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:426) ~[netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:398) [netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:877) [netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144) [netty-all-4.1.3.Final.jar:4.1.3.Final]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_152]

I increased the values in the jvm.options file and the problem disappeared. It was a tuning issue.
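
For reference, the settings in question are the heap size flags in jvm.options. The right values depend on the machine, so these are only illustrative, not the exact numbers I used:

-Xms4g
-Xmx4g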
