Java Heap Space error OutOfMemoryError Logstash

I am getting the error Logstash - java.lang.OutOfMemoryError: Java heap space. I have already increased the heap in the jvm.options file from 1g to 2g, then 8g, then 10g, but I still get the same error. How can I find out how much heap space I need, or am I supposed to make a different change, either in this file or elsewhere?
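For reference, the Logstash heap is set in config/jvm.options via the -Xms and -Xmx flags, and the usual advice is to keep them equal so the heap does not resize at runtime. A sketch of what the file looks like (the 4g value is only an example, not a recommendation for this workload):

```
## config/jvm.options (example values)
-Xms4g
-Xmx4g
```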

In /var/log/elasticsearch I am getting an "all shards failed" error.

The error also only appears after Logstash has already been running successfully for a while.

Error produced from running config file:

java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid110004.hprof ...
Heap dump file created [1285291394 bytes in 9.087 secs]
warning: thread "[main]>worker0" terminated with exception (report_on_exception is true):
java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapCharBuffer.(
at java.nio.CharBuffer.allocate(
at java.nio.charset.CharsetDecoder.decode(
at java.nio.charset.Charset.decode(
at org.jruby.RubyEncoding.decodeUTF8(
at org.jruby.runtime.Helpers.decodeByteList(
at org.jruby.RubyString.decodeString(
at org.jruby.RubyString.toJava(
at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(
at org.jruby.RubyClass.newInstance(
at org.jruby.RubyClass$INVOKER$i$$INVOKER$i$newInstance.gen)
at org.jruby.internal.runtime.methods.JavaMethod$
at java.lang.invoke.LambdaForm$DMH/1089407736.invokeVirtual_L7_L(LambdaForm$DMH)
at java.lang.invoke.LambdaForm$BMH/1063494931.reinvoke(LambdaForm$BMH)
at java.lang.invoke.LambdaForm$MH/1259769769.delegate(LambdaForm$MH)
at java.lang.invoke.LambdaForm$MH/296954388.guard(LambdaForm$MH)
at java.lang.invoke.LambdaForm$MH/1259769769.delegate(LambdaForm$MH)
at java.lang.invoke.LambdaForm$MH/296954388.guard(LambdaForm$MH)
at java.lang.invoke.LambdaForm$MH/621502043.linkToCallSite(LambdaForm$MH)
at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.manticore_minus_0_dot_6_dot_4_minus_java.lib.manticore.client.RUBY$method$request_from_options$0(/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/manticore-0.6.4-java/lib/manticore/client.rb:471)
at java.lang.invoke.LambdaForm$DMH/1436901839.invokeStatic_L9_L(LambdaForm$DMH)
at java.lang.invoke.LambdaForm$BMH/77334939.reinvoke(LambdaForm$BMH)
at java.lang.invoke.LambdaForm$MH/589835301.delegate(LambdaForm$MH)
at java.lang.invoke.LambdaForm$MH/505567264.guard(LambdaForm$MH)
at java.lang.invoke.LambdaForm$MH/589835301.delegate(LambdaForm$MH)
at java.lang.invoke.LambdaForm$MH/505567264.guard(LambdaForm$MH)

For you to get a 1.2 GB heap dump after an OOM in an 8 or 10 GB heap would be unusual. After the OOM a full GC is run before the dump is written, which means there were multiple GB of garbage on the heap when the OOM occurred. If that happened when using the default GC it would suggest an extremely fast object allocation and expiration rate. (It could also be allocation of a large object in a badly fragmented heap, but that would be even more unusual in my experience.)

No matter. To find the problem you should take a look at the heap dump in a tool like MAT. If you find yourself spending more than a minute trying to work out what is using all the memory then give up. It should be front and centre on the main page.
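If you want a dump to load into MAT without waiting for the next OOM, one option (a sketch on my part, not something from this thread; it requires a HotSpot JVM) is to trigger one programmatically through the diagnostic MXBean:

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.io.File;
import java.lang.management.ManagementFactory;

public class DumpHeap {
    public static void main(String[] args) throws Exception {
        // HotSpot-specific platform bean; fails if the target file already exists
        HotSpotDiagnosticMXBean bean =
            ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        File out = new File("manual.hprof");
        // live=true runs a full GC first, so the dump contains only reachable objects,
        // which is what you want when hunting for what is *retaining* memory
        bean.dumpHeap(out.getAbsolutePath(), true);
        System.out.println("wrote " + out.length() + " bytes");
    }
}
```

For a process you cannot modify, the equivalent is jmap against the running PID; either way the resulting .hprof opens directly in MAT.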

The stack trace shows it is in manticore, which is the http client that Logstash uses. Are you using an http or elasticsearch input, or an http filter? What does the configuration look like?

If that is the current code (and I think 0.6.4 is current) then it is building the body of the request.


input {
  file {
    path => "/csvDirectory/*.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  csv {
    separator => " "
    columns => ["26 column names"]
  }
  mutate {
    split => { "path_column_name" => " (View)$" }
    split => { "variant" => "," }
    split => { "module" => "," }
    strip => ["linkid","itemrev","uid"]
    remove_field => "[message]"
  }
}
output {
  elasticsearch {
    hosts => ["hostname:9200"]
    index => "index_name"
    document_id => "%{8 different fields}"
  }
}
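As a side note (my own assumption, not something established in this thread): with very large events, the in-flight batches held by each worker can pin a lot of decoded data at once, and memory pressure sometimes eases if you shrink the batch size or worker count in logstash.yml:

```
# logstash.yml (example values, not a recommendation)
pipeline.workers: 2
pipeline.batch.size: 50   # default is 125; smaller batches hold fewer events in memory at once
```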

Believe it or not, I had a lot of trouble with out-of-memory errors too. It turned out that it all started when we removed the swap space.

We were new to this and were just following what the ELK documentation suggests.

After turning swap back on with swapon, it all went away.

The issue could be that it is pulling in many CSV files, all of which are very large: up to 5 GB for a single CSV file.
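A back-of-the-envelope sketch (my own estimate, not from the thread) of why files that size hurt: the stack trace dies in CharsetDecoder.decode allocating a CharBuffer, and decoding UTF-8 bytes into UTF-16 chars costs 2 bytes per char, so fully decoding mostly-ASCII data needs roughly double the input size in heap:

```java
public class DecodeCost {
    public static void main(String[] args) {
        long fileBytes = 5L * 1024 * 1024 * 1024; // one 5 GB CSV file
        // For mostly-ASCII input, each UTF-8 byte becomes one UTF-16 char,
        // and a CharBuffer stores every char in 2 bytes, so the decoded
        // form needs roughly twice the on-disk size.
        long charBufferBytes = fileBytes * 2;
        System.out.println(charBufferBytes / (1024 * 1024 * 1024) + " GB"); // prints 10 GB
    }
}
```

That is only the worst case for anything decoding a whole file or request body in one go; the file input itself reads line by line, which is why the batch of events in flight (and the request bodies built from them) matters more than the raw file size.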

It seems to have been fixed by increasing the heap size on Logstash. I had only been increasing it on the Elasticsearch servers; it hadn't occurred to me to increase the heap size on the Logstash server instead. @Badger

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.