Logstash-to-Logstash forwarding results in illegible data

We have a scenario where we want to configure a Logstash server inside one location/VPC (let's call it LS-A) with multiple inputs to gather the appropriate data, and then use a single output to forward everything to a second Logstash server (LS-B). LS-B holds all the filters and does its own output to Elasticsearch.
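Roughly, the intended layout looks like this. Hosts, ports, and the particular inputs below are hypothetical placeholders, just to make the topology concrete:

```
# LS-A (shipper): multiple inputs, one TCP output toward LS-B.
input {
  beats  { port => 5044 }   # e.g. Filebeat shippers
  syslog { port => 5140 }
}

output {
  tcp {
    host => "ls-b.internal"  # hypothetical LS-B address
    port => 5000
    mode => "client"         # connect out to LS-B
  }
}
```

```
# LS-B (indexer): TCP input, all filtering, then Elasticsearch.
input {
  tcp { port => 5000 mode => "server" }
}

filter {
  # all filters live here
}

output {
  elasticsearch { hosts => ["http://localhost:9200"] }
}
```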

To wire this up we set up a simple TCP output/input pair. When we examine the logs on LS-B, however, the data appears to be raw binary, or some hex-heavy gibberish; basically it looks like a codec mix-up. To work around this we tried setting the codec to json manually, but to no avail. As soon as the data goes over the TCP output it's essentially garbage.
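For what it's worth, if we read the plugin docs right, the tcp output and tcp input don't share a default codec (the output defaults to json, the input to line), so the framing presumably has to be pinned explicitly on both ends. A minimal sketch of the pairing we would expect to work, with hypothetical host/port values and json_lines chosen on both sides:

```
# LS-A side: frame each event as newline-delimited JSON.
output {
  tcp {
    host  => "ls-b.internal"   # hypothetical address
    port  => 5000
    codec => json_lines
  }
}

# LS-B side: decode with the matching codec; the tcp input's
# default "line" codec would otherwise leave the JSON payload
# as opaque text.
input {
  tcp {
    port  => 5000
    codec => json_lines
  }
}
```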

Setting up the inputs directly on LS-B and bypassing LS-A works fine. Pointing LS-A directly at Elasticsearch also works fine. It's only when forwarding through two Logstash servers that we run into this issue.

Anyone run into this?
