Java heap OOM in Logstash

Hi, I'm using the latest version of Logstash (8.6.0) and I hit the same processing error every day for one of my pipelines.
In the meantime I decreased the number of workers and the batch size and increased the heap, but it's still not enough; I still get an OOM. How can I prevent such cases?

java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid1.hprof ...
Heap dump file created [17137852563 bytes in 167.954 secs]
[FATAL] 2023-01-31 11:08:43.217 [[npdb_dns]>worker2] Logstash - uncaught error (in thread [npdb_dns]>worker2)
java.lang.OutOfMemoryError: Java heap space
        at org.jruby.RubyString.cat19(org/jruby/ ~[jruby.jar:?]
        at org.jruby.RubyString.cat19(org/jruby/ ~[jruby.jar:?]
        at org.jruby.ext.stringio.StringIO.stringIOWrite(org/jruby/ext/stringio/ ~[jruby.jar:?]
        at org.jruby.ext.stringio.StringIO.write(org/jruby/ext/stringio/ ~[jruby.jar:?]
        at java.lang.invoke.LambdaForm$DMH/0x0000000801299000.invokeVirtual(java/lang/invoke/LambdaForm$DMH) ~[?:?]
        at java.lang.invoke.LambdaForm$MH/0x00000008012a0c00.invoke(java/lang/invoke/LambdaForm$MH) ~[?:?]
        at java.lang.invoke.DelegatingMethodHandle$Holder.delegate(java/lang/invoke/DelegatingMethodHandle$Holder) ~[?:?]
        at java.lang.invoke.LambdaForm$MH/0x000000080127bc00.guard(java/lang/invoke/LambdaForm$MH) ~[?:?]
        at java.lang.invoke.DelegatingMethodHandle$Holder.delegate(java/lang/invoke/DelegatingMethodHandle$Holder) ~[?:?]
        at java.lang.invoke.LambdaForm$MH/0x000000080127bc00.guard(java/lang/invoke/LambdaForm$MH) ~[?:?]
        at java.lang.invoke.Invokers$Holder.linkToCallSite(java/lang/invoke/Invokers$Holder) ~[?:?]
        at usr.share.logstash.vendor.bundle.jruby.$2_dot_6_dot_0.gems.logstash_minus_output_minus_elasticsearch_minus_11_dot_12_dot_1_minus_java.lib.logstash.outputs.elasticsearch.http_client.bulk(/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elasticsearch-11.12.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:143) ~[?:?]
        at java.lang.invoke.DirectMethodHandle$Holder.invokeStatic(java/lang/invoke/DirectMethodHandle$Holder) ~[?:?]
        at java.lang.invoke.LambdaForm$MH/0x0000000801f33c00.invoke(java/lang/invoke/LambdaForm$MH) ~[?:?]
        at java.lang.invoke.LambdaForm$MH/0x0000000800cad000.invokeExact_MT(java/lang/invoke/LambdaForm$MH) ~[?:?]
        at org.jruby.RubyEnumerable$ ~[jruby.jar:?]
        at org.jruby.RubyArray.each(org/jruby/ ~[jruby.jar:?]
        at org.jruby.RubyArray$INVOKER$i$0$0$$INVOKER$i$0$0$each.gen) ~[jruby.jar:?]

What are the specs of your Logstash server, and how much memory did you configure for the heap?

What is the configuration of workers and batch size for your pipelines?

There is not much you can do about OOM errors other than increasing the heap, or reducing memory usage by decreasing the workers and batch size.
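As a sketch of both knobs (the values below are illustrative, not a recommendation for your hardware): the heap is set in `config/jvm.options`, and per-pipeline concurrency in `config/logstash.yml`:

```yaml
## config/jvm.options -- fixed heap; keep min and max equal to avoid resize pauses
# -Xms12g
# -Xmx12g

## config/logstash.yml -- fewer in-flight events per pipeline (example values)
pipeline.workers: 4        # e.g. half the current 8
pipeline.batch.size: 250   # e.g. a quarter of the current 1000
```

Memory pressure scales roughly with workers × batch size, so halving either roughly halves the number of events held in the heap at once.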

Are you using persistent queues or memory queues?
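If memory queues turn out to be the problem, a persistent queue buffers events on disk instead of in the heap, so a slow output applies backpressure to the inputs rather than piling batches into memory. A minimal sketch (the size and path are assumptions to adjust for your layout):

```yaml
# config/logstash.yml (or per-pipeline in pipelines.yml)
queue.type: persisted
queue.max_bytes: 4gb                   # disk budget for the queue (illustrative)
path.queue: /var/lib/logstash/queue    # assumption: pick a disk with headroom
```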

Specs of the server:

Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                24
On-line CPU(s) list:   0-23
Thread(s) per core:    2
Core(s) per socket:    6
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 44
Model name:            Intel(R) Xeon(R) CPU           X5670  @ 2.93GHz
Stepping:              2
CPU MHz:               1596.000
CPU max MHz:           2926.0000
CPU min MHz:           1596.0000
BogoMIPS:              5867.45
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              12288K
NUMA node0 CPU(s):     0-5,12-17
NUMA node1 CPU(s):     6-11,18-23
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt aes lahf_lm epb kaiser tpr_shadow vnmi flexpriority ept vpid dtherm arat

Memory configured for Logstash -> the server has 20 GB, of which 10 GB is for the heap.

and I'm using memory queues for 4 pipelines

log.level: info
config.reload.automatic: true
config.reload.interval: 30s
pipeline.ecs_compatibility: disabled
pipeline.workers: 8
pipeline.batch.size: 1000
pipeline.batch.delay: 50
pipeline.ordered: auto
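A back-of-envelope check of what that configuration holds in memory (assuming all 4 pipelines use the settings above, and an assumed average event size, which for DNS logs could be much larger after enrichment):

```python
# Rough estimate of in-flight event memory for the configuration above.
# avg_event_kb is a hypothetical average serialized event size; real
# events vary widely, and filters plus the elasticsearch bulk buffer
# (visible in the stack trace) can multiply this several times over.
pipelines = 4
workers = 8          # pipeline.workers
batch_size = 1000    # pipeline.batch.size
avg_event_kb = 10    # assumed average event size in KB

in_flight_events = pipelines * workers * batch_size
heap_mb = in_flight_events * avg_event_kb / 1024

print(in_flight_events)  # 32000 events held in memory at once
print(heap_mb)           # 312.5 MB for the raw events alone
```

The raw events are only a fraction of the heap cost: each worker also builds the bulk request body for the elasticsearch output (the `http_client.rb` frame in the trace), so a 1000-event batch with large events can briefly need several times its raw size per worker.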
