Multiple pipelines bug with pipe-to-pipe config and CEF codec

Hey, Leandro! Hope you're just fine :slight_smile:

Is there any news? Were you able to reproduce this bug?

Hello,

I could not replicate it; the delimiter setting solved the split-message issue in the cases I tested.

The main issue is that the cef codec may split the message if you do not specify a delimiter; setting one solves it.
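As a minimal sketch of that fix, the delimiter can be set directly on the cef codec of the tcp input; the port and delimiter value here are assumptions, adjust them to your pipeline:

```
# Sketch only: tcp input with an explicit delimiter on the cef codec.
# The port (5000) and the newline delimiter are assumptions.
input {
  tcp {
    port => 5000
    codec => cef {
      delimiter => "\n"
    }
  }
}
```

The sender side then needs to terminate each message with that same delimiter, otherwise the input still has no way to know where one event ends and the next begins.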

I was able to replicate this behavior and solve it with a delimiter.

This seems to be an edge case that I'm not sure can be reproduced without replicating your entire data ingestion flow.

For example, you have something sending logs to Kafka, then Logstash reads from Kafka and uses a tcp output to send the logs to another Logstash pipeline with a tcp input and the cef codec.
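That flow would look roughly like the two pipelines below; this is a hypothetical sketch, and the broker address, topic name, host, and port are all assumptions:

```
# Pipeline A (sketch): reads from Kafka, forwards over TCP.
input {
  kafka {
    bootstrap_servers => "localhost:9092"   # assumption
    topics => ["cef-logs"]                  # assumption
  }
}
output {
  tcp {
    host => "127.0.0.1"                     # assumption
    port => 5000                            # assumption
  }
}

# Pipeline B (sketch): receives over TCP and decodes CEF.
input {
  tcp {
    port => 5000
    codec => cef { delimiter => "\n" }
  }
}
```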

Have you tried to add the delimiter in this flow or just in the flow where you generate a random cef message?

Also, the WARN you shared is related to the tcp output plugin, not the tcp input plugin where you have the cef codec.

Yes, I've tried to use the delimiter in all flows - in the real ELK, in the test tool, and in the local ELK - it didn't work at all.

Maybe the real problem is in the TCP connection then, not in the CEF codec? How can I check this?

You would need to check the messages at every point you have.

For example, you are using something to send the logs to Kafka; have you confirmed that the logs are arriving in Kafka without any issues? This can be validated by consuming the data from Kafka.
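One way to do that validation is with the console consumer that ships with Kafka; the broker address and topic name below are assumptions:

```
# Sketch: inspect what is actually stored in Kafka before Logstash reads it.
# Broker address and topic name are assumptions.
kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic cef-logs --from-beginning --max-messages 10
```

If the messages already look truncated or merged here, the problem is upstream of Logstash entirely.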

Then you have a tcp output; you need to confirm that the message is not being split here, so you would need to add another output, maybe writing to a file, and then compare it with the output you have on the other side.
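A sketch of that comparison step, duplicating the events to a file next to the existing tcp output; the file path is an assumption:

```
# Sketch: write the same events to a local file so they can be diffed
# against what arrives on the receiving side. Path is an assumption.
output {
  tcp {
    host => "127.0.0.1"   # assumption
    port => 5000          # assumption
  }
  file {
    path => "/tmp/tcp-output-debug.log"   # assumption
  }
}
```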

And then you have the TCP input; you could test adding another pair of tcp output/input without the cef codec to see whether the message is split or not.
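A sketch of that test pair, using the line codec instead of cef so the codec is ruled out; the port is an assumption:

```
# Sketch: a second tcp output/input pair with the plain line codec
# instead of cef, to check whether the split happens at the TCP layer.
# The port (5001) is an assumption.

# sender pipeline
output {
  tcp {
    host => "127.0.0.1"
    port => 5001
    codec => line
  }
}

# receiver pipeline
input {
  tcp {
    port => 5001
    codec => line
  }
}
```

If the messages arrive whole here but split with the cef codec, the codec is the culprit; if they split here too, the problem is in the TCP transport between the pipelines.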


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.