"Cannot allocate memory" in Logstash exec plugin

I use the Logstash exec plugin to dump info about our running database process every 10s. This has worked fine for the last few weeks, but now it has stopped reporting data on one host.
Checking the Logstash logs, I see this error repeated every 10s (reformatted for readability):

[2019-05-27T13:29:30,129][ERROR][logstash.inputs.exec     ] Error while running command {
:command=>"my_command_to_query_status",
:e=>#<Errno::ENOMEM: Cannot allocate memory - source /data/home/db2inst1/.bashrc && /data/home/db2inst1/sqllib/adm/db2pd -hadr -db dsxdb>,
:backtrace=>[
"org/jruby/RubyIO.java:3835:in `popen'",
"/data/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-exec-3.3.2/lib/logstash/inputs/exec.rb:97:in `run_command'",
"/data/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-exec-3.3.2/lib/logstash/inputs/exec.rb:71:in `execute'",
"/data/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-exec-3.3.2/lib/logstash/inputs/exec.rb:47:in `block in run'",
"/data/logstash/vendor/bundle/jruby/2.3.0/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler/jobs.rb:234:in `do_call'",
"/data/logstash/vendor/bundle/jruby/2.3.0/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler/jobs.rb:258:in `do_trigger'",
"/data/logstash/vendor/bundle/jruby/2.3.0/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler/jobs.rb:300:in `block in start_work_thread'",
"/data/logstash/vendor/bundle/jruby/2.3.0/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler/jobs.rb:299:in `block in start_work_thread'",
"org/jruby/RubyKernel.java:1292:in `loop'", "/data/logstash/vendor/bundle/jruby/2.3.0/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler/jobs.rb:289:in `block in start_work_thread'"
]}

Restarting Logstash helped, and the data is being reported again. But I wonder how soon this will fail yet again.

I found another old thread that reported the same issue, but got no answers: Logstash Exec Input Plugin throws OutofMemory Error

Hi,

I was the original poster of the problem you have linked to. I couldn't find any solution (and haven't tested with newer versions), but to circumvent the problem I switched from an exec-based pull model to a TCP-based push model.

Rather than having Logstash use exec to run the process, I now run it under supervisor, and the process pushes its result over TCP (on localhost) to my Logstash instance. In the Logstash pipeline, I use the tcp input plugin to read and process the data.
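A minimal sketch of that setup, assuming supervisor is installed; the file name, port, and paths are hypothetical, and my_command_to_query_status stands in for the real status command:

```ini
; /etc/supervisor/conf.d/db-status.conf -- hypothetical file and paths
[program:db-status]
; Run the status command every 10s and push its output to Logstash over TCP.
command=/bin/sh -c 'while true; do my_command_to_query_status | nc 127.0.0.1 6789; sleep 10; done'
autorestart=true
```

With a matching tcp input (port 6789) in the Logstash pipeline, supervisor keeps the pusher alive and restarts it if it dies.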

Hope this helps you.


Hi Dheeraj,

thanks, that sounds like a good idea.
I could open a Logstash tcp input on 127.0.0.2:6789, run my exec command with cron, and pipe it into nc.

A simple test conf for future reference:

input {
  tcp {
    port => 6789
    host => "127.0.0.2"
    codec => multiline {
      pattern => "JUSTADUMMY"
      what => "previous"
      negate => true
    }
  }
}

output {
  stdout {}
}

Now I can run this command as a test: ls -la | nc 127.0.0.2 6789

Unfortunately cron only runs with a granularity of 1 min, so I'll have to rig something up with several cron entries and a sleep 10 && prefix to get my 10s interval :)
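The staggered crontab could look like this (a sketch; my_command_to_query_status stands in for the real command, and the address matches the tcp input above):

```shell
# crontab: six entries per minute, offset by 10s each
* * * * * my_command_to_query_status | nc 127.0.0.2 6789
* * * * * sleep 10 && my_command_to_query_status | nc 127.0.0.2 6789
* * * * * sleep 20 && my_command_to_query_status | nc 127.0.0.2 6789
* * * * * sleep 30 && my_command_to_query_status | nc 127.0.0.2 6789
* * * * * sleep 40 && my_command_to_query_status | nc 127.0.0.2 6789
* * * * * sleep 50 && my_command_to_query_status | nc 127.0.0.2 6789
```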

Another alternative could be the unix socket input instead, which might be more lightweight?
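A minimal sketch of that variant, assuming the logstash-input-unix plugin is installed; the socket path is a hypothetical choice:

```text
input {
  unix {
    path => "/var/run/db-status.sock"
    mode => "server"
  }
}
```

The command would then pipe into the socket, e.g. my_command_to_query_status | nc -U /var/run/db-status.sock, which requires a netcat build with unix socket support (-U).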


I have no performance or foot-print comparisons.

But here's an old thread for Logstash 2.x which states that the tcp input is about 5x faster than the unix input.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.