TCP output exception

Hi everyone,

I have configured a pipeline to read logs using the tcp input plugin, but I keep seeing a tcp output exception, and every now and then the tcp connection gets closed.

Error logs:
tcp output exception {:host=>"10.109.0.0", :port=>6626, :exception=>#<Errno::ECONNRESET: Connection reset by peer - No message available>, :backtrace=>["org/jruby/RubyIO.java:3014:in `sysread'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-tcp-6.0.2/lib/logstash/outputs/tcp.rb:161:in `block in register'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-codec-json_lines-3.1.0/lib/logstash/codecs/json_lines.rb:67:in `encode'", "/usr/share/logstash/logstash-core/lib/logstash/codecs/delegator.rb:48:in `block in encode'", "org/logstash/instrument/metrics/AbstractSimpleMetricExt.java:65:in `time'", "org/logstash/instrument/metrics/AbstractNamespacedMetricExt.java:64:in `time'", "/usr/share/logstash/logstash-core/lib/logstash/codecs/delegator.rb:47:in `encode'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-tcp-6.0.2/lib/logstash/outputs/tcp.rb:209:in `receive'", "/usr/share/logstash/logstash-core/lib/logstash/outputs/base.rb:105:in `block in multi_receive'", "org/jruby/RubyArray.java:1821:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/outputs/base.rb:105:in `multi_receive'", "org/logstash/config/ir/compiler/OutputStrategyExt.java:143:in `multi_receive'", "org/logstash/config/ir/compiler/AbstractOutputDelegatorExt.java:121:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:300:in `block in start_workers'"

And another error:
lb-snat-ai-p-elk-logstash/172.2.3.4 closing (Connection reset by peer)

Any suggestions on why this is happening would be appreciated.

Thanks in Advance.
Regards,
Anusha K

Whatever your tcp output is connecting to is dropping the connection. You should check the logs at that end to see if the reason is logged there.

What have you put for:

  • http.host in logstash.yml
  • host in input tcp?

If you have a restriction there, try temporarily with 0.0.0.0, for example:
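A minimal sketch of what that temporary change could look like on the input side (the port here is only a placeholder, not necessarily yours):

input {
        tcp {
                # bind to all interfaces instead of a specific address
                host => "0.0.0.0"
                port => 6626
        }
}

and in logstash.yml, set http.host: "0.0.0.0".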

You need to share your configuration.

You said you configured Logstash to receive logs using TCP, which is the tcp input plugin, but the error you shared happens when Logstash is sending something using the tcp output plugin. You need to provide more context about what your issue is.
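To illustrate the distinction, here is a minimal sketch of each plugin (the host and port are placeholders taken from your error message):

# tcp input plugin: Logstash listens on a port and receives logs
input {
        tcp {
                port => 6626
        }
}

# tcp output plugin: Logstash opens an outbound connection and sends events;
# this is where your ECONNRESET backtrace originates
output {
        tcp {
                host => "10.109.0.0"
                port => 6626
        }
}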


Hi all,

I have 2 pipelines configured with the tcp input plugin on ports 6625, 6632, and 6633.
All of them read data and send it to a single output port, 6626, via the tcp output plugin.
Parsing is then handled according to the type field.

Shipper 1:

input {
        tcp {
                port => 6632
                #codec => "json_lines"
                type => "dscommons_sql_new_logging"
        }
        tcp {
                port => 6633
                #codec => "json_lines"
                type => "dscommons_project_new_logging"
        }
}

filter {
}

output {
        if [type] == "dscommons_project_new_logging" or [type] == "dscommons_sql_new_logging" {
                tcp {
                        host => "{{logstash_parser}}"
                        port => 6626
                        codec => "json_lines"
                }
        }
}

Shipper 2:

input {
        beats {
                port => 6626
                type => "dscommons"
                client_inactivity_timeout => 3600
        }
}

filter {
}

output {
        if [type] == "dscommons" {
                tcp {
                        host => "{{logstash_parser}}"
                        port => 6626
                        codec => "json_lines"
                }
        }
}

Parser (modifies the data and stores it in Elasticsearch):

input {
        tcp {
                port => 6626
                codec => "json_lines"
                type => "dscommons"
        }
}

filter {
        if [type] == "dscommons" {
                if [log][file][path] == "/opt/data-science/log/data-access.log" {
                        grok {
                                match => {"message" => "%{DATA:log_timestamp} %{DATA:project} %{DATA:database} %{DATA:log_level} %{DATA:user_id} %{GREEDYDATA:log_message}"}
                        }
                        mutate {
                                add_field => {
                                        "beat_version" => "%{[agent][version]}"
                                        "host_name" => "%{[agent][hostname]}"
                                }
                                update => {"type" => "dscommons_sql_logs"}
                                remove_field => ["message","port","input","beat","prospector","offset","fields","host","log"]
                        }
                } else if [log][file][path] == "/opt/data-science/log/project.log" {
                        json {
                                source => "message"
                        }
                        mutate {
                                add_field => {
                                        "beat_version" => "%{[agent][version]}"
                                        "host_name" => "%{[agent][hostname]}"
                                }
                                update => {"type" => "dscommons_project_logs"}
                                remove_field => ["port","input","beat","prospector","offset","fields","host","log"]
                        }
                }
        } else if [type] == "dscommons_project_new_logging" or [type] == "dscommons_sql_new_logging" {
                mutate {
                        gsub => [
                                "message", "<14>", "",
                                "message", '\u0000', '\n'
                        ]
                }
                split {
                        field => "message"
                        terminator => "\n"
                }
                json {
                        source => "message"
                }
        }
}

output {
        if [type] == "dscommons_sql_logs" or [type] == "dscommons_sql_new_logging" {
                elasticsearch {
                        ...........
                }
        }
}

So port 6626 receives logs coming from 3 different input ports. Is that an issue?

Regards,
Anusha K

I have shared the configurations.
