I use the Logstash exec input plugin to dump information about our running database process every 10s. This had been working fine for the past few weeks, but now one host has stopped reporting data.
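For reference, the input is configured roughly like this (a sketch, simplified; the actual command is the db2pd call that shows up in the error below):

```
input {
  exec {
    # query DB2 HADR status via db2pd, sourcing the instance environment first
    command => "source /data/home/db2inst1/.bashrc && /data/home/db2inst1/sqllib/adm/db2pd -hadr -db dsxdb"
    interval => 10
  }
}
```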
Checking the Logstash logs, I see this error repeated every 10s (reformatted for readability):
[2019-05-27T13:29:30,129][ERROR][logstash.inputs.exec ] Error while running command {
:command=>"my_command_to_query_status",
:e=>#<Errno::ENOMEM: Cannot allocate memory - source /data/home/db2inst1/.bashrc && /data/home/db2inst1/sqllib/adm/db2pd -hadr -db dsxdb>,
:backtrace=>[
"org/jruby/RubyIO.java:3835:in `popen'",
"/data/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-exec-3.3.2/lib/logstash/inputs/exec.rb:97:in `run_command'",
"/data/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-exec-3.3.2/lib/logstash/inputs/exec.rb:71:in `execute'",
"/data/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-exec-3.3.2/lib/logstash/inputs/exec.rb:47:in `block in run'",
"/data/logstash/vendor/bundle/jruby/2.3.0/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler/jobs.rb:234:in `do_call'",
"/data/logstash/vendor/bundle/jruby/2.3.0/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler/jobs.rb:258:in `do_trigger'",
"/data/logstash/vendor/bundle/jruby/2.3.0/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler/jobs.rb:300:in `block in start_work_thread'",
"/data/logstash/vendor/bundle/jruby/2.3.0/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler/jobs.rb:299:in `block in start_work_thread'",
"org/jruby/RubyKernel.java:1292:in `loop'",
"/data/logstash/vendor/bundle/jruby/2.3.0/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler/jobs.rb:289:in `block in start_work_thread'"
]}
Restarting Logstash helped, and data is being reported again. But I wonder how soon this will fail again.
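To gather more evidence the next time this happens, I plan to snapshot the host's memory state as soon as the ENOMEM error appears (a rough sketch of my own, assuming a standard Linux host; these commands are not from the plugin docs):

```shell
# Overall free/used memory on the host
free -m

# Kernel overcommit policy: 0 = heuristic, 1 = always allow, 2 = strict accounting
cat /proc/sys/vm/overcommit_memory

# Commit accounting: CommitLimit vs. Committed_AS shows how close we are to the limit
grep -i commit /proc/meminfo

# Resident and virtual size of a single process (here, this shell, as an example)
ps -o pid,rss,vsz,comm -p $$
```

My suspicion is that Logstash's JVM has grown large enough that forking it for `popen` momentarily needs more memory than the kernel is willing to promise, but I have not confirmed this.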
I found an old thread reporting the same issue, but it got no answers: Logstash Exec Input Plugin throws OutofMemory Error