Pipe output plugin not passing event

Hello..

I've written a python script to create tickets when certain SNMP traps are received. It works great at the command line when I cat a sample event into it.. but when run from logstash with the pipe plugin it's not getting the event data. I know it's running the script when traps come in.. but with no actual event data. I was initially running with 2 filter workers but have dropped back to one for debugging. Any ideas what's going wrong here?
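
Stripped to its essentials, the read logic looks something like this (simplified; create_ticket is a stand-in for the real ticketing call):

    #!/usr/bin/env python
    import json
    import sys

    def create_ticket(event):
        # Stand-in for the real call that opens a ticket.
        print("opening ticket for: %s" % event.get("message"))

    # Read the whole event from stdin (blocks until EOF), then parse it.
    raw = sys.stdin.read()
    create_ticket(json.loads(raw))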

I'm using logstash-1.5.2 from the RPM, with ruby 2.0.0p645 (2015-04-13). The config section is this:

    output {
      if ("ticket" in [tags]) {
        pipe {
          command => "/opt/env/loghandler/bin/pyTT &"
        }
      }
    }

And the messages I see with debug logging turned on are..

:message=>"Error writing to pipe, closing pipe.", :command=>"/opt/env/loghandler/bin/pyTT &", :pipe=>#<IO:fd 1087>, @active=false>, :error=>#<Errno::EBADF: Bad file descriptor - Bad file descriptor>, :level=>:error, :file=>"logstash/outputs/pipe.rb", :line=>"49", :method=>"receive"}

and sometimes this:

    :error=>#<IOError: Stream closed>

Have you tried removing the & from the end of the command? When a shell backgrounds a command I think it has to close the child's stdin, otherwise the child would steal input from the parent in a (normally) not so useful way.

If the reason you're backgrounding the command is that it takes too long to run and stalls the Logstash pipeline, I suggest you set up a simple broker to buffer the messages. Or, even simpler, buffer the event in a file that pyTT can read (and delete) when it runs.
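
For the file approach, a rough sketch of what pyTT's consuming side could look like (the spool path is made up, and I'm assuming the Logstash file output writes one JSON event per line):

    import glob
    import json
    import os

    SPOOL_GLOB = "/var/spool/loghandler/*.json"  # made-up spool location

    def create_ticket(event):
        # Stand-in for the real call that opens a ticket.
        print("opening ticket for: %s" % event.get("message"))

    # Handle every buffered event file (one JSON event per line),
    # then delete it so it isn't picked up again on the next run.
    for path in sorted(glob.glob(SPOOL_GLOB)):
        with open(path) as f:
            for line in f:
                if line.strip():
                    create_ticket(json.loads(line))
        os.remove(path)

You'd run that periodically, e.g. from cron.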

Hmm.. removing the & now reveals a different problem. The python script is blocking on reading stdin. According to the docs sys.stdin.read() will read until it gets an EOF.. which works fine when run from the command line but the pipe plugin does not close the pipe after writing the event. Inserting a drop_pipe(command) just after the pipe.puts(command) fixes this problem... but in context I'm not sure if that's the right thing to do here?

The idea of the pipe output is to write multiple events through the same pipe, since forking off a new process for every event would potentially be very inefficient, so in that sense the plugin is doing the right thing. If you don't want to change your script you could wrap it with a script that understands when it has received a full message and kicks off a new pyTT process for each one.
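
Roughly, and assuming each event arrives as a single JSON line (which should be the pipe output's default when message_format isn't set), such a wrapper could look like:

    #!/usr/bin/env python
    import subprocess
    import sys

    # Read one event (one line) at a time off the pipe and start a
    # fresh pyTT for each; closing the child's stdin delivers the EOF
    # that pyTT's sys.stdin.read() is waiting for.
    for line in sys.stdin:
        line = line.strip()
        if not line:
            continue
        child = subprocess.Popen(["/opt/env/loghandler/bin/pyTT"],
                                 stdin=subprocess.PIPE)
        child.communicate(line.encode("utf-8"))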

I'm fine with making changes.. but just trying to understand the expected behavior here. Also.. the docs for this plugin could use some more detail. =] So.. when calling a script via pipe it's expected to stay open to handle multiple events... but also return quickly so as not to block logstash? Is there an example implementation of how to do both these things or am I missing something?

> I'm fine with making changes.. but just trying to understand the expected behavior here. Also.. the docs for this plugin could use some more detail. =]

I agree. Feel free to file an issue about it.

> So.. when calling a script via pipe it's expected to stay open to handle multiple events... but also return quickly so as not to block logstash?

No, that was my misunderstanding. I also missed that a process handles multiple messages. As long as Logstash can write to the pipe every time an event arrives (i.e. the pipe buffer won't fill up) you'll be okay.
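
So the simplest change is probably to make pyTT loop over lines instead of reading to EOF; something like this (create_ticket again being a placeholder):

    import json
    import sys

    def create_ticket(event):
        # Stand-in for the real call that opens a ticket.
        print("opening ticket for: %s" % event.get("message"))

    # Treat each line as one complete JSON event instead of reading
    # to EOF, so one long-lived process can serve the pipe.
    for line in sys.stdin:
        line = line.strip()
        if line:
            create_ticket(json.loads(line))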

@magnusbaeck I have an issue with pipe as well: Logstash pipe plugin exec not working. Can you please help?