Logstash stops processing logs after some time for TCP input

Hi,

In my ELK stack (5.6.1), I have configured Logstash to listen on both a TCP input (port 5000) and a Beats input for Filebeat (port 5044).

Logs coming from Filebeat work fine, but logs coming over TCP stop arriving after some time, without any error. I have checked the Logstash logs in debug mode but couldn't find any error entries.

A Python script is used to send logs to the TCP port. When I run the script, the Logstash log shows the debug entry below:

[DEBUG][logstash.pipeline ] output received {"event"=>{"vid"=>"vutus", "@timestamp"=>2019-02-25T11:26:16.483Z, "port"=>49116, "@version"=>"1", "host"=>"x.x.x.x", "id"=>"1234567890", "message"=>"This is dummy message to test the logstash by vutus", "aws"=>{"invoked_function_arn"=>"lambda_funtion_arn", "memory_limit_in_mb"=>"1.1MB", "function_version"=>"1.0", "function_name"=>"lambda_funtion"}, "timestamp"=>"Fri Feb 22 06:57:08 2019"}}

After some time, once Logstash has stopped processing the application logs, I run the Python script manually and can no longer find any corresponding entry in the Logstash logs (like the one above), even though the script itself completes successfully.

I have checked memory and CPU usage and there are no issues there.

The Python script that I have used is:

import json
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((HOST, PORT))
# log_entry is a dict holding the JSON-formatted log event
s.sendall((json.dumps(log_entry) + "\n").encode("UTF-8"))  # sendall avoids partial writes
s.close()
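Note that a successful send only means the kernel accepted the bytes into the socket's send buffer, not that Logstash actually read them, so the script can complete successfully even while the pipeline is stalled. A minimal sketch of a sender that at least surfaces connection-level failures (the HOST value below is a placeholder; 5000 matches the tcp input):

import json
import socket

HOST, PORT = "logstash.example.com", 5000  # placeholder host for illustration

def ship(log_entry):
    # Connect with a timeout so a dead listener fails fast instead of hanging.
    with socket.create_connection((HOST, PORT), timeout=5) as s:
        s.settimeout(5)  # also bounds the sendall below
        s.sendall((json.dumps(log_entry) + "\n").encode("UTF-8"))

try:
    ship({"message": "probe event"})
    print("accepted by the kernel (not proof Logstash consumed it)")
except OSError as exc:
    print("send failed:", exc)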

The Logstash input configuration is:

input {
    tcp {
        port  => 5000
        codec => json
    }
    beats {
        port => 5044
    }
}
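One detail that may matter here (an assumption on my part, not something confirmed in this thread): the script writes newline-delimited JSON, and when a single TCP connection carries a stream of such events, the tcp input is usually paired with the json_lines codec rather than json. A sketch of that variant:

input {
    tcp {
        port  => 5000
        codec => json_lines   # one JSON document per line
    }
    beats {
        port => 5044
    }
}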

Can anyone point me in the right direction to solve this?

What does the rest of the configuration look like? It sounds like the output is having issues, resulting in back-pressure on the input, which prevents it from reading anything more.
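One way to check for that back-pressure, assuming the default monitoring API on port 9600 (on 5.x the pipeline stats live under _node/stats/pipeline), is to watch the per-plugin event counters: if the elasticsearch outputs' event counts stop increasing while the tcp input still accepts connections, the output is the bottleneck. A minimal sketch:

import json
import urllib.request

# Logstash monitoring API; on 5.x pipeline stats are under _node/stats/pipeline.
with urllib.request.urlopen("http://localhost:9600/_node/stats/pipeline") as resp:
    stats = json.load(resp)

# Print event counters per output plugin; run twice a minute apart and compare.
for plugin in stats.get("pipeline", {}).get("plugins", {}).get("outputs", []):
    print(plugin.get("name"), plugin.get("events"))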

Thanks for the reply.

There are no filters in my config.

The output section of my config looks like this:

output {
    if [type] == "rules" or [type] == "alert" or [type] == "alertsecurity" or [type] =~ /tomcat[0-9]-prpc-[a-z,-]*/ {
        elasticsearch {
            hosts => "${ESURL_PORT_9200_TCP_ADDR}:9200"
            manage_template => false
            index => "prpc"
        }
    }
    else if [type] == "catalina" or [type] =~ /tomcat[0-9]-[a-z]*/ {
        elasticsearch {
            hosts => "${ESURL_PORT_9200_TCP_ADDR}:9200"
            manage_template => false
            index => "tomcat"
        }
    }
    else if [type] =~ /haproxy-[a-z]*/ {
        elasticsearch {
            hosts => "${ESURL_PORT_9200_TCP_ADDR}:9200"
            manage_template => false
            index => "haproxy"
        }
    }
    else if [type] == "postgres" or [type] =~ /postgres-[a-z]*/ {
        elasticsearch {
            hosts => "${ESURL_PORT_9200_TCP_ADDR}:9200"
            manage_template => false
            index => "db"
        }
    }
    else if [type] == "bitbucketlog" {
        elasticsearch {
            hosts => "${ESURL_PORT_9200_TCP_ADDR}:9200"
            manage_template => false
            index => "bitbucketlog"
        }
    }
    else if [type] == "syslog" {
        elasticsearch {
            hosts => "${ESURL_PORT_9200_TCP_ADDR}:9200"
            manage_template => false
            index => "syslog"
        }
    }
    else {
        elasticsearch {
            hosts => "${ESURL_PORT_9200_TCP_ADDR}:9200"
            manage_template => false
            index => "cloudwatch"
        }
    }
}
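If the Elasticsearch outputs do stall intermittently, one standard mitigation available on 5.x is the persistent queue, which buffers events on disk instead of immediately back-pressuring the inputs. A sketch for logstash.yml (the size below is an illustrative placeholder to tune):

# logstash.yml: buffer events on disk so a slow output does not
# immediately block the tcp input
queue.type: persisted
queue.max_bytes: 1gb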

When I checked it thoroughly, I found that all my logs were being pushed with a _dateparseerror tag. I have corrected that, and now all the logs are being pushed without any error tag.

Is this related to the issue that I am facing?

Since fixing the date parse error in my logs, I have not encountered this issue. I think the logs with the date parse error were the problem. Thanks @Badger

The issue still exists. There are no error entries in the Logstash logs, so it seems there are no parsing errors.
I observed this issue two days after putting this Logstash instance into production.

Could you please point me in any other direction?

If you need any other information, please let me know.

Thanks in advance.
