Error: Cannot allocate memory

Hello, I have been facing an issue for a long time without finding a solution.

I have a dedicated server with 12 GB of RAM, but I can only allocate 2 GB to Logstash; otherwise the process fails with an error.
It seems linked to the exec output plugin:

[2020-12-03T09:30:03,959][ERROR][logstash.javapipeline ][hypervisor] Pipeline worker error, the pipeline will be stopped {:pipeline_id=>"hypervisor", :error=>"(ENOMEM) Cannot allocate memory - /usr/bin/dtach -n /tmp/e8cadd0f-6dc8-421e-9b53-efad6d809b4f -Ez /etc/logstash/conf.d/scripts/elm-process-alm-mineops-ev.sh -H=EM-11864 -o=\"SITE:MN - ZONE:CP5->CP6 - EM:41 Km/h - LIMIT:40 Km/h - DELTA:+<> Km/h - DATE:<>\"", :exception=>Java::OrgJrubyExceptions::SystemCallError, :backtrace=>["org.jruby.RubyProcess.spawn(org/jruby/RubyProcess.java:1651)", "org.jruby.RubyKernel.spawn(org/jruby/RubyKernel.java:1658)", "uri_3a_classloader_3a_.META_minus_INF.jruby_dot_home.lib.ruby.stdlib.open3.popen_run(uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/open3.rb:202)", "uri_3a_classloader_3a_.META_minus_INF.jruby_dot_home.lib.ruby.stdlib.open3.popen3(uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/open3.rb:98)", "usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.logstash_minus_output_minus_exec_minus_3_dot_1_dot_4.lib.logstash.outputs.exec.receive(/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-exec-3.1.4/lib/logstash/outputs/exec.rb:51)", "usr.share.logstash.logstash_minus_core.lib.logstash.outputs.base.multi_receive(/usr/share/logstash/logstash-core/lib/logstash/outputs/base.rb:105)", "org.jruby.RubyArray.each(org/jruby/RubyArray.java:1809)", "usr.share.logstash.logstash_minus_core.lib.logstash.outputs.base.multi_receive(/usr/share/logstash/logstash-core/lib/logstash/outputs/base.rb:105)", "org.logstash.config.ir.compiler.OutputStrategyExt$AbstractOutputStrategyExt.multi_receive(org/logstash/config/ir/compiler/OutputStrategyExt.java:138)", "org.logstash.config.ir.compiler.AbstractOutputDelegatorExt.multi_receive(org/logstash/config/ir/compiler/AbstractOutputDelegatorExt.java:121)", "usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.start_workers(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:293)"], :thread=>"#<Thread:0xd251d85@/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:53 sleep>"}

How can I solve this?
I tried using 'dtach', as you can see above, but the same issue happens.
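For context, the output section looks roughly like this. This is a reconstruction, not my exact config: the dtach invocation and script path come from the error log above, while the event field references are hypothetical placeholders.

```
output {
  exec {
    # The exec output forks the whole Logstash JVM once per event
    # before exec'ing the command, which is where the ENOMEM comes from.
    # The dtach flags and script path are copied from the error log;
    # the %{...} field references are illustrative only.
    command => "/usr/bin/dtach -n /tmp/%{[@metadata][run_id]} -Ez /etc/logstash/conf.d/scripts/elm-process-alm-mineops-ev.sh -H=%{em_id} -o=\"%{alert_text}\""
  }
}
```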

Config:
Logstash, Elasticsearch, and Kibana 7.9.1

I run Logstash with a 400 MB heap, and the virtual size of the process is 2.8 GB. With a 2 GB heap it would be well over 4 GB. When an exec input or output runs a command the process forks, and at that moment the kernel has to be able to commit address space for a complete copy of the parent, so the memory usage would briefly be around 9 GB. You are not going to be able to give it much more memory if you only have 12 GB on the server.
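For reference, the heap is set in Logstash's config/jvm.options; the 400 MB heap from my test above corresponds to:

```
# config/jvm.options -- fixed JVM heap for Logstash
# (400m matches the test described above; raise with care,
# since the total virtual size is much larger than the heap)
-Xms400m
-Xmx400m
```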

Thanks for your fast response; so the behavior is normal in this case.
Since I cannot add more RAM, I should see whether I can output the data without forking a process.
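One fork-free option would be the pipe output plugin, which starts the command once and streams events to its stdin instead of forking per event. A minimal sketch, assuming the script can be adapted to read newline-delimited events from stdin (the --stdin flag is a hypothetical adaptation):

```
output {
  pipe {
    # pipe spawns the command once and keeps the pipe open, writing
    # one line per event to its stdin -- no per-event fork of the JVM.
    # The --stdin flag is an assumed change to the script, which would
    # need to loop over lines instead of taking -H/-o arguments.
    command => "/etc/logstash/conf.d/scripts/elm-process-alm-mineops-ev.sh --stdin"
    # Close idle pipes after 10 seconds (the plugin default).
    ttl => 10
  }
}
```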
