Errors with event splitting using the TCP input plugin

Hi, we have a problem with the Logstash TCP input plugin: it is not correctly splitting incoming events on "\n". In our tests we send events at a rate of ~60 per second; the error does not happen on every event, but it does occur every few seconds.
The data we are sending is a CSV of 10 fields
(e.g. 1643715053132,2,111069200000398,9999999999901,9999911111313903776912,,123,06,992,81102) in which the only "\n" is the one appended at the end. We have tested sending the CSV data over TCP both from Python with socket and from Java with Netty.

The version we are using is 7.16.3 on Windows, but it also happens with 7.13.4 running in Docker on CentOS.

Hi again,
I have continued testing and still can't find where the problem is.

I have downloaded the plugin code from its repository and modified the Logstash Gemfile to use it instead of the version bundled with Logstash.
Then I modified the channelRead method, adding a log line for each buffer received:

            public void channelRead(final ChannelHandlerContext ctx, final Object msg) {
                ByteBuf msgBB = (ByteBuf) msg;
                String msgStr = msgBB.toString(Charset.forName("utf-8"));
                // Added log line to see exactly what each read delivers
                logger.info("Event received: " + msgStr);
                decoder.decode(ctx.channel().id().asShortText(), (ByteBuf) msg);
            }

We are sending the csv data from a Python script with this format:

st = "%s,2,123069200000398,9999999%s,9999911111%s,,123,06,123,12345\n"%(current_milli_time(),counter,str(randint(1, 9999999999)))
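The helper `current_milli_time()` is not shown in the post; a minimal sketch of such a helper (an assumption, since the original definition is not included) would be:

```python
import time

def current_milli_time():
    # Epoch time in milliseconds, matching the 13-digit first CSV field
    return int(round(time.time() * 1000))

print(current_milli_time())
```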

Now, we can see in the log every event received:

[2022-02-10T15:55:32,592][INFO ][logstash.inputs.tcp      ][tcp-in-stream][c43491a09699f02867d3c49411d208fc0e0309dc7eff9d341fe85064e8082ea5] Event received: 1644508532579,2,123069200000398,99999999785,99999111115769873442,,123,06,123,12345
[2022-02-10T15:55:32,607][INFO ][logstash.inputs.tcp      ][tcp-in-stream][c43491a09699f02867d3c49411d208fc0e0309dc7eff9d341fe85064e8082ea5] Event received: 1644508532594,2,123069200000398,99999999786,99999111113222013386,,123,06,123,12345
[2022-02-10T15:55:44,188][INFO ][logstash.inputs.tcp      ][tcp-in-stream][c43491a09699f02867d3c49411d208fc0e0309dc7eff9d341fe85064e8082ea5] Event received: 1644508544175,2,123069200000398,999999910375,99999111113615600681,,123,06,123,12345
[2022-02-10T15:55:44,236][INFO ][logstash.inputs.tcp      ][tcp-in-stream][c43491a09699f02867d3c49411d208fc0e0309dc7eff9d341fe85064e8082ea5] Event received: 1644508544196,2,123069200000398,999999910376,99999111117577138132,,123,06,123,12345

And, from time to time, you can clearly see the error I mentioned: the events are not being separated correctly:

[2022-02-10T15:55:32,582][INFO ][logstash.inputs.tcp      ][tcp-in-stream][c43491a09699f02867d3c49411d208fc0e0309dc7eff9d341fe85064e8082ea5] Event received: ,2,123069200000398,99999999784,99999111112403560691,,123,06,123,12345

[2022-02-10T17:01:23,804][INFO ][logstash.inputs.tcp      ][tcp-in-stream][c43491a09699f02867d3c49411d208fc0e0309dc7eff9d341fe85064e8082ea5] Event received: 3,06,123,12345

[2022-02-10T17:01:47,617][INFO ][logstash.inputs.tcp      ][tcp-in-stream][c43491a09699f02867d3c49411d208fc0e0309dc7eff9d341fe85064e8082ea5] Event received: 45

[2022-02-10T17:02:59,478][INFO ][logstash.inputs.tcp      ][tcp-in-stream][c43491a09699f02867d3c49411d208fc0e0309dc7eff9d341fe85064e8082ea5] Event received: ,2,123069200000398,99999996856,99999111114498294233,,123,06,123,12345

[2022-02-10T17:03:47,309][INFO ][logstash.inputs.tcp      ][tcp-in-stream][c43491a09699f02867d3c49411d208fc0e0309dc7eff9d341fe85064e8082ea5] Event received: ,2,123069200000398,99999999927,99999111116985331172,,123,06,123,12345
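The truncated prefixes above are what you would expect if socket reads are not being re-assembled into full lines: TCP is a byte stream, so recv() boundaries need not match the sender's write boundaries, and a line-oriented decoder has to buffer across reads. A minimal sketch (plain Python, independent of Logstash) showing both the fragmentation and the buffering a correct decoder needs:

```python
import socket

# A connected pair of stream sockets stands in for the client and Logstash
reader, writer = socket.socketpair()
writer.sendall(b"1643715053132,2,111069200000398\n"
               b"1643715053999,2,111069200000399\n")
writer.close()

buf = b""
events = []
while True:
    chunk = reader.recv(20)  # deliberately tiny buffer: reads rarely end on "\n"
    if not chunk:
        break
    buf += chunk             # keep the partial line until its "\n" arrives
    while b"\n" in buf:
        line, buf = buf.split(b"\n", 1)
        events.append(line)
print(events)
```

Without the `buf` accumulation, each `recv()` chunk would be treated as an event on its own, which yields exactly the kind of truncated fields seen in the logs above.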

What does the Python script look like? The error could be in the script, not in Logstash.

Can you revert to the original code of the tcp input and run Logstash with the debug flag?

Also, share your logstash pipeline.

This pipeline and the Python script are the easiest way I have found to reproduce the error:


- pipeline.id: tcp-in-stream
  config.string: |
    input {
        tcp {
            port => 12443
            codec => csv {
                columns => ["field1","field2","field3","field4","field5","field6","field7","field8","field9","field10"]
            }
        }
    }
    output {
        stdout { codec => "rubydebug" }
    }

Python script:

import socket
from random import randint
import time

HOST = "localhost"
PORT = 12443
sckt = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sckt.connect((HOST, PORT))
counter = 1
while True:
  for item in range(0, 10000):
      st = "1939480745000,2,123069200000398,9999999%s,9999911111%s,,123,06,123,12345\n"%(counter,str(randint(1, 9999999999)))
      data_to_send = st.encode()
      # The post was truncated here; sending the full line and advancing the
      # counter are the obvious missing steps
      sckt.sendall(data_to_send)
      counter += 1

As I said in my first message, the error happens with both Python and Netty in Java, so I don't think it is a client error.

Once Logstash is running, I filter the Logstash log with this command to see when the field1 data is wrong:

docker logs -f --tail 100 logstash-test | grep "field1\"" | grep -E -v '"[[:digit:]]{13}'
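For clarity, the second grep drops every line whose quoted field1 value does not start with 13 digits (the millisecond timestamp). The same filter, sketched in Python against hypothetical sample lines (not taken verbatim from the logs):

```python
import re

# Hypothetical rubydebug output lines, for illustration only
lines = [
    '        "field1" => "1644508532579",',  # intact 13-digit timestamp
    '        "field1" => "45",',             # truncated event
    '        "field1" => nil,',              # empty first column
]

# Equivalent to: grep "field1\"" | grep -E -v '"[[:digit:]]{13}'
thirteen_digits = re.compile(r'"[0-9]{13}')
bad = [l for l in lines if 'field1"' in l and not thirteen_digits.search(l)]
print(bad)
```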

Erroneous events are arriving every few minutes:

docker logs -f --tail 100 logstash-test | grep "field1\"" | grep -E -v '"[[:digit:]]{13}'
        "field1" => "5",
        "field1" => nil,
        "field1" => "23",
        "field1" => "5",
        "field1" => "45",
        "field1" => "45",
        "field1" => "45",
        "field1" => "45",
        "field1" => "345",
        "field1" => "345",
        "field1" => nil,
        "field1" => "345",
        "field1" => "0",
        "field1" => "0",

Just to confirm, the tests were made with the original plugin code. With debug enabled, the plugin doesn't output any extra information; that's why I used the "rubydebug" codec as output.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.