Memory leak in Logstash 6.0.0

Hello,

I am looking for help with Logstash 6.0.0. Winlogbeat is sending logs to Logstash 6.0.0, which is running on an 8 GB machine with a 4 GB min/max heap size. Over a period of a few hours, Logstash uses up all the memory and dies. For as long as Logstash is running, it processes all incoming logs in time.
Please assist.

Please post the logs and your config.

Thank you for responding. The log file is 2.5 GB (about 15 MB after compression) and it mostly contains this message:
[2018-02-15T23:59:28,078][WARN ][io.netty.channel.AbstractChannelHandlerContext] An exception 'java.lang.NullPointerException' [enable DEBUG level for full stacktrace] was thrown by a user handler's exceptionCaught() method while handling the following exception:
java.io.IOException: Connection reset by peer
Please advise how I can send the log file.
Here is the config file:
input {
  beats {
    port => 5056
  }
}

filter {
  if [level] == "Information" {
    drop {}
  }
  mutate {
    remove_field => ["[event_data][Binary]", "[user_data][binaryData]", "[user_data][binaryDataSize]"]
  }
}

output {
  elasticsearch {
    hosts => ["http://xx.xx.xx.xx:9200"]
    index => "winlogbeat-6.0.0-%{+YYYY.MM.dd}"
    document_type => "doc"
    user => "xxxxxxxx"
    password => "xxxxxxxx"
    manage_template => false
  }
}
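(As a side note, the pipeline file can be syntax-checked without starting Logstash; something like the following, with the path adjusted to wherever the config actually lives:)

# check pipeline syntax only, then exit (path below is just a placeholder)
bin/logstash -f /etc/logstash/conf.d/winlogbeat.conf --config.test_and_exit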

Here is another thing: I set the heap size to 2 GB, after which Logstash memory utilization has stayed at roughly 55% on an 8 GB machine for several hours now. There is no backlog in event processing.
Is there a ratio of Logstash heap size to total memory used?
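(For reference, the 2 GB heap is configured via the -Xms/-Xmx flags in Logstash's config/jvm.options, along these lines:)

# config/jvm.options -- min and max heap set to the same value
-Xms2g
-Xmx2g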

If there is a log showing an OOM or crash of the Logstash process, please show that.

Do you have any non-default settings in your logstash.yml file? How are you securing the Elasticsearch cluster?

The log file shows nothing related to an OOM or a crash of the Logstash process. Logstash was running in console mode, and the console output had just one line more than the log file, which said 'Killed':
[2018-02-17T07:26:23,715][WARN ][io.netty.channel.AbstractChannelHandlerContext] An exception 'java.lang.NullPointerException' [enable DEBUG level for full stacktrace] was thrown by a user handler's exceptionCaught() method while handling the following exception:
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[?:1.8.0_151]
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) ~[?:1.8.0_151]
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) ~[?:1.8.0_151]
at sun.nio.ch.IOUtil.read(IOUtil.java:192) ~[?:1.8.0_151]
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380) ~[?:1.8.0_151]
at io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:288) ~[netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1100) ~[netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:349) ~[netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:112) ~[netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:571) ~[netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:512) ~[netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:426) ~[netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:398) [netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:877) [netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144) [netty-all-4.1.3.Final.jar:4.1.3.Final]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]
Killed
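(A bare 'Killed' on the console usually means the kernel's OOM killer terminated the process rather than the JVM exiting on its own; if that is what happened, it should be visible in the kernel log, for example:)

# look for OOM-killer activity in the kernel log
dmesg | grep -iE "killed process|out of memory"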

I have the following non-default setting in pipelines.yml:
pipeline.workers: 12

It is a 4-core machine. The only way Elasticsearch is secured is with X-Pack.
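(For completeness, in pipelines.yml that setting sits under a pipeline entry, roughly like the following; the pipeline id and config path here are just placeholders:)

- pipeline.id: main
  path.config: "/etc/logstash/conf.d/*.conf"
  pipeline.workers: 12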

I ran it again and this time got the following message in the console:

java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid19661.hprof ...

A fatal error has been detected by the Java Runtime Environment:

SIGSEGV (0xb) at pc=0x00007f2cb328b0bb, pid=19661, tid=0x00007f2cb01b3700

JRE version: Java(TM) SE Runtime Environment (8.0_151-b12) (build 1.8.0_151-b12)
Java VM: Java HotSpot(TM) 64-Bit Server VM (25.151-b12 mixed mode linux-amd64 compressed oops)
Problematic frame:
V  [libjvm.so+0x6960bb]  java_lang_Class::signers(oopDesc*)+0x1b

Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again

An error report file with more information is saved as:
/opt/elk/hs_err_pid19661.log

If you would like to submit a bug report, please visit:
http://bugreport.java.com/bugreport/crash.jsp

Aborted (core dumped)
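(The java_pid19661.hprof heap dump mentioned above, written to Logstash's working directory, should show what is actually filling the heap; with JDK 8 tooling it can be inspected for example with:)

# browse the heap dump written on OOM (JDK 8's jhat, or use Eclipse MAT)
jhat java_pid19661.hprof

# or take a live class histogram of a running Logstash process, replacing <pid> with its process id
jmap -histo:live <pid> | head -n 30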
