Java Heap Space error OutOfMemoryError Logstash

I am getting a Logstash error: java.lang.OutOfMemoryError: Java heap space. I have already increased the heap in the jvm.options file from 1g to 2g, then 8g, then 10g, but I still get the same error. How can I work out how much heap space I actually need, or am I supposed to make a different change in this file or somewhere else?
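
For reference, the lines I am editing in jvm.options are the standard initial/maximum heap flags, shown here with one of the sizes I tried (the usual advice, as I understand it, is to keep -Xms and -Xmx equal):

-Xms8g
-Xmx8g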

In /var/log/elasticsearch I am also seeing org.elasticsearch.action.search.SearchPhaseExecutionException: all shards failed.

The error only appears after Logstash has already been running successfully for a while.

Error produced from running config file:

java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid110004.hprof ...
Heap dump file created [1285291394 bytes in 9.087 secs]
warning: thread "[main]>worker0" terminated with exception (report_on_exception is true):
java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapCharBuffer.&lt;init&gt;(HeapCharBuffer.java:57)
at java.nio.CharBuffer.allocate(CharBuffer.java:335)
at java.nio.charset.CharsetDecoder.decode(CharsetDecoder.java:795)
at java.nio.charset.Charset.decode(Charset.java:807)
at org.jruby.RubyEncoding.decodeUTF8(RubyEncoding.java:269)
at org.jruby.runtime.Helpers.decodeByteList(Helpers.java:2439)
at org.jruby.RubyString.decodeString(RubyString.java:797)
at org.jruby.RubyString.toJava(RubyString.java:6221)
at org.jruby.java.invokers.ConstructorInvoker.call(ConstructorInvoker.java:98)
at org.jruby.java.invokers.ConstructorInvoker.call(ConstructorInvoker.java:195)
at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:378)
at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:213)
at org.jruby.java.proxies.ConcreteJavaProxy$InitializeMethod.call(ConcreteJavaProxy.java:60)
at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:211)
at org.jruby.RubyClass.newInstance(RubyClass.java:997)
at org.jruby.RubyClass$INVOKER$i$newInstance.call(RubyClass$INVOKER$i$newInstance.gen)
at org.jruby.internal.runtime.methods.JavaMethod$JavaMethodZeroOrOneOrTwoOrNBlock.call(JavaMethod.java:353)
at org.jruby.java.proxies.ConcreteJavaProxy$NewMethod.call(ConcreteJavaProxy.java:165)
at java.lang.invoke.LambdaForm$DMH/1089407736.invokeVirtual_L7_L(LambdaForm$DMH)
at java.lang.invoke.LambdaForm$BMH/1063494931.reinvoke(LambdaForm$BMH)
at java.lang.invoke.LambdaForm$MH/1259769769.delegate(LambdaForm$MH)
at java.lang.invoke.LambdaForm$MH/296954388.guard(LambdaForm$MH)
at java.lang.invoke.LambdaForm$MH/1259769769.delegate(LambdaForm$MH)
at java.lang.invoke.LambdaForm$MH/296954388.guard(LambdaForm$MH)
at java.lang.invoke.LambdaForm$MH/621502043.linkToCallSite(LambdaForm$MH)
at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.manticore_minus_0_dot_6_dot_4_minus_java.lib.manticore.client.RUBY$method$request_from_options$0(/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/manticore-0.6.4-java/lib/manticore/client.rb:471)
at java.lang.invoke.LambdaForm$DMH/1436901839.invokeStatic_L9_L(LambdaForm$DMH)
at java.lang.invoke.LambdaForm$BMH/77334939.reinvoke(LambdaForm$BMH)
at java.lang.invoke.LambdaForm$MH/589835301.delegate(LambdaForm$MH)
at java.lang.invoke.LambdaForm$MH/505567264.guard(LambdaForm$MH)
at java.lang.invoke.LambdaForm$MH/589835301.delegate(LambdaForm$MH)
at java.lang.invoke.LambdaForm$MH/505567264.guard(LambdaForm$MH)

For you to get a 1.2 GB heap dump after an OOM in an 8 or 10 GB heap would be unusual. After the OOM a full GC is run before the dump is written, which means there were multiple GB of garbage on the heap when the OOM occurred. If that happened when using the default GC it would suggest an extremely fast object allocation and expiration rate. (It could also be allocation of a large object in a badly fragmented heap, but that would be even more unusual in my experience.)

No matter. To find the problem you should take a look at the heap dump in a tool like MAT. If you find yourself spending more than a minute trying to work out what is using all the memory then give up. It should be front and centre on the main page.
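
As an aside, I think Eclipse MAT also ships a ParseHeapDump script that can generate its Leak Suspects report straight from the .hprof without opening the GUI, roughly like this (the script path depends on where MAT is unpacked):

./ParseHeapDump.sh /path/to/java_pid110004.hprof org.eclipse.mat.api:suspects
# should produce a java_pid110004_Leak_Suspects.zip report next to the dump file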

The stack trace shows it is in manticore, which is the HTTP client that Logstash uses. Are you using an http or elasticsearch input, or an http filter? What does the configuration look like?

If that is the current code (and I think 0.6.4 is current) then it is building the body of the request.


input {
  file {
    path => "/csvDirectory/*.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  csv {
    separator => " "
    columns => ["26 column names"]
  }
  mutate {
    split => { "path_column_name" => " (View)$" }
    split => { "variant" => "," }
    split => { "module" => "," }
    strip => ["linkid","itemrev","uid"]
    remove_field => "[message]"
  }
}
output {
  elasticsearch {
    hosts => ["hostname:9200"]
    index => "index_name"
    document_id => "%{8 different fields}"
  }
}

Believe it or not, I had a lot of trouble with out-of-memory errors too.
It turned out it all started when we removed the swap space.

We were new to this and were just following what the ELK documentation suggests.

After turning swap back on, it all went away.
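
In case it helps anyone, this is roughly how we checked and re-enabled it on our Linux hosts (adjust for your own fstab entries):

sudo swapon --show   # list active swap devices; empty output means no swap is enabled
free -h              # confirm the swap total
sudo swapon -a       # re-enable all swap entries defined in /etc/fstab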

The issue could be that it is pulling in many CSV files, some of which are very large, up to 5 GB for a single file.
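
I am also wondering whether reducing the amount of in-flight data would help, since (as I understand it) each pipeline worker holds a whole batch of events in memory. Something like this in logstash.yml, with purely illustrative values (125 is the default batch size):

pipeline.workers: 2
pipeline.batch.size: 125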

It seems to have been fixed by increasing the heap size for Logstash itself. I had only been increasing it on the Elasticsearch servers; it had not occurred to me to increase the heap on the Logstash server instead. @Badger

