Strange hanging/buffering behavior with tcp plugin


#1

I am feeding the output of one logstash instance into the input of another. I am using the tcp output plugin (in logstash A) to send data, and the tcp input plugin (in logstash B) to receive it. I'm seeing some strange behavior: no matter how many "logs" I send through logstash A, none show up in logstash B until I quit logstash A. And even then, the only thing that shows up in logstash B is the very first log I sent through logstash A. So logstash A definitely knows where logstash B lives and can pipe data to it, but something seems to be hanging and/or buffering the output from logstash A.
Below are my configs:

output config:

input {
	stdin {}
}

filter {}

output {
	stdout { codec => rubydebug }
	tcp {
		mode => "client"
		host => "localhost"
		port => 5043
	}
}

input config:

input {
	tcp {
		mode => "server"
		port => 5043
	}
}

filter {
	json {
		source => "message"
	}
}

output{
	stdout { codec => rubydebug }
}

I have asked a similar question here -- Logstash tcp input/output -- but no solutions thus far. I just didn't want the topic to slowly die, because I'd like to figure out a solution ASAP. Maybe I should submit a defect ticket on the logstash project on GitHub?


(Magnus Bäck) #2

It's a codec mismatch problem. The tcp output defaults to emitting JSON messages on a single line (which isn't consistent with the documentation that states that each event will be followed by a newline) while the tcp input waits for a newline before emitting anything. Changing to the json_lines codec on both the input and the output fixes the problem (and you won't need any json filters to parse the JSON payload).
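
To make that concrete, here is a sketch of the two configs from post #1 with the json_lines codec applied on both ends (same host and port as above; exact option placement is my reading of the answer, not a verified config):

output config:

input {
	stdin {}
}

output {
	stdout { codec => rubydebug }
	tcp {
		mode => "client"
		host => "localhost"
		port => 5043
		codec => json_lines
	}
}

input config:

input {
	tcp {
		mode => "server"
		port => 5043
		codec => json_lines
	}
}

output {
	stdout { codec => rubydebug }
}

With json_lines on the input, each newline-delimited JSON event is decoded as it arrives, so the json filter on the receiving side should no longer be needed.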


#3

I can't use the json_lines codec for some reason..


#4

It must have to do with using the courier input to receive the event, because I don't get this error when I just use the stdin input and type my own event, e.g. "{}".

