Error on Filebeat - ERR Failed to publish events caused by: read tcp

Hello,
I am seeing the following errors in the Filebeat logs. Can someone help?

2017-06-16T11:39:30-04:00 INFO Error publishing events (retrying): read tcp 10.140.76.11:37670->10.140.223.89:5044: i/o timeout
2017-06-16T11:39:36-04:00 INFO Non-zero metrics in the last 30s: libbeat.logstash.call_count.PublishEvents=1 libbeat.logstash.publish.read_errors=1 libbeat.logstash.publish.write_bytes=7166 libbeat.logstash.published_but_not_acked_events=2048
2017-06-16T11:39:43-04:00 ERR Failed to publish events caused by: write tcp 10.140.76.11:37686->10.140.223.89:5044: write: connection reset by peer
2017-06-16T11:39:43-04:00 INFO Error publishing events (retrying): write tcp 10.140.76.11:37686->10.140.223.89:5044: write: connection reset by peer
2017-06-16T11:40:06-04:00 INFO Non-zero metrics in the last 30s: libbeat.logstash.call_count.PublishEvents=1 libbeat.logstash.publish.read_bytes=6 libbeat.logstash.publish.write_bytes=4660 libbeat.logstash.publish.write_errors=1 libbeat.logstash.published_and_acked_events=168 libbeat.logstash.published_but_not_acked_events=1880
2017-06-16T11:40:36-04:00 INFO Non-zero metrics in the last 30s: libbeat.logstash.publish.read_bytes=18 libbeat.logstash.publish.write_bytes=30276
2017-06-16T11:41:03-04:00 ERR Failed to publish events caused by: read tcp 10.140.76.11:37700->10.140.223.89:5044: i/o timeout
2017-06-16T11:41:03-04:00 INFO Error publishing events (retrying): read tcp 10.140.76.11:37700->10.140.223.89:5044: i/o timeout
2017-06-16T11:41:06-04:00 INFO Non-zero metrics in the last 30s: libbeat.logstash.call_count.PublishEvents=1 libbeat.logstash.publish.read_errors=1 libbeat.logstash.publish.write_bytes=7427 libbeat.logstash.published_and_acked_events=599 libbeat.logstash.published_but_not_acked_events=1281
2017-06-16T11:41:36-04:00 INFO Non-zero metrics in the last 30s: libbeat.logstash.publish.read_bytes=18 libbeat.logstash.publish.write_bytes=36231
2017-06-16T11:41:40-04:00 ERR Failed to publish events caused by: read tcp 10.140.76.11:37776->10.140.223.89:5044: i/o timeout
2017-06-16T11:41:40-04:00 INFO Error publishing events (retrying): read tcp 10.140.76.11:37776->10.140.223.89:5044: i/o timeout
2017-06-16T11:42:06-04:00 INFO Non-zero metrics in the last 30s: libbeat.logstash.call_count.PublishEvents=1 libbeat.logstash.publish.read_errors=1 libbeat.logstash.publish.write_bytes=8811 libbeat.logstash.published_and_acked_events=1013 libbeat.logstash.published_but_not_acked_events=268
2017-06-16T11:42:11-04:00 ERR Failed to publish events caused by: read tcp 10.140.76.11:37818->10.140.223.89:5044: i/o timeout
2017-06-16T11:42:11-04:00 INFO Error publishing events (retrying): read tcp 10.140.76.11:37818->10.140.223.89:5044: i/o timeout
2017-06-16T11:42:36-04:00 INFO Non-zero metrics in the last 30s: libbeat.logstash.call_count.PublishEvents=1 libbeat.l

Thanks.

Looks like your Logstash is resetting the connection. Is there a problem with Logstash? Congestion with one of the outputs in Logstash? Have you looked at your Logstash logs and Elasticsearch logs?
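Not something taken from your logs, but a sketch of one common mitigation, assuming the resets are idle-connection timeouts rather than Logstash itself failing: raise the inactivity timeout on the beats input (and, on the Filebeat side, the logstash output's timeout setting). The 300-second value below is only illustrative, not a recommendation.

input {
  beats {
    port => 5044
    # How long Logstash keeps an idle Beats connection open before closing it.
    # The default is 60 seconds; 300 here is just an example value.
    client_inactivity_timeout => 300
  }
}

If you try something like this, it usually makes sense to raise Filebeat's output.logstash timeout (default 30s) in step with it so both sides agree on how long a quiet connection may live.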

For test purposes I am loading the config file from the directory /usr/share/logstash:

bin/logstash -f test9.conf --config.test_and_exit
bin/logstash -f test9.conf --config.reload.automatic

root@syslogstash1:/usr/share/logstash# more /etc/issue
Ubuntu 14.04.2 LTS \n \l

Also, this is what I am seeing:
syslogstash1:/var/log/logstash# ls -ltr
total 52
-rw-r--r-- 1 logstash logstash 16178 May 19 12:00 logstash-plain-2017-05-19.log
-rw-r--r-- 1 logstash logstash 31248 May 24 15:26 logstash-plain-2017-05-24.log
-rw-r--r-- 1 logstash logstash 2493 Jun 1 08:43 logstash-plain.log
root@lvsyslogstash1:/var/log/logstash# more logstash-plain.log
[2017-06-01T08:43:32,692][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://dfdevelkserver1.df.jabodo.com:9200/]}}
[2017-06-01T08:43:32,701][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://dfdevelkserver1.df.jabodo.com:9200/, :path=>"/"}
[2017-06-01T08:43:32,922][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<URI::HTTP:0x4a55eabb URL:http://dfdevelkserver1.df.jabodo.com:9200/>}
[2017-06-01T08:43:32,923][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2017-06-01T08:43:33,026][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword"}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2017-06-01T08:43:33,096][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<URI::Generic:0x5a4d7928 URL://dfdevelkserver1.df.jabodo.com:9200>]}
[2017-06-01T08:43:33,148][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>6, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>750}
[2017-06-01T08:43:33,552][INFO ][logstash.inputs.beats ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2017-06-01T08:43:33,584][INFO ][logstash.pipeline ] Pipeline main started
[2017-06-01T08:43:33,630][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2017-06-01T08:43:44,164][WARN ][logstash.runner ] SIGTERM received. Shutting down the agent.
[2017-06-01T08:43:44,176][WARN ][logstash.agent ] stopping pipeline {:id=>"main"}
root@lvsyslogstash1:/var/log/logstash#

Also, at the Logstash server:

root@syslogstash1:/var/log/logstash# netstat -ptlna | grep :5044
tcp 0 0 0.0.0.0:5044 0.0.0.0:* LISTEN 21895/java
tcp 0 0 10.140.223.89:5044 10.140.76.11:46674 ESTABLISHED 21895/java

What's in the Logstash config you are using?

I am using the following config:

input {
  beats {
    port => 5044
    codec => multiline {
      pattern => "^\[%{TIMESTAMP_ISO8601}\]"
      negate => true
      what => previous
    }
  }
}
filter {
  grok {
    match => { "message" => "\[%{TIMESTAMP_ISO8601:timestamp_match}\]%{SPACE}\:\|\:%{SPACE}%{WORD:level}%{SPACE}\:\|\:%{SPACE}%{USERNAME:hostname}%{SPACE}\:\|\:%{SPACE}%{GREEDYDATA:coidkey}%{SPACE}\:\|\:%{SPACE}%{GREEDYDATA:clientinfo}%{SPACE}\:\|\:%{SPACE}%{GREEDYDATA:clientip}%{SPACE}\:\|\:%{SPACE}%{GREEDYDATA:Url}%{SPACE}\:\|\:%{SPACE}%{JAVACLASS:class}%{SPACE}\:\|\:%{SPACE}%{USER:ident}%{SPACE}%{GREEDYDATA:msg}" }
  }
}
output {
  stdout { codec => rubydebug }

  if "_grokparsefailure" in [tags] {
    # write events that didn't match to a file
    file { path => "/tmp/grok_failures.txt" }
  } else {
    elasticsearch {
      hosts => "dfsyselastic.df.jabodo.com:9200"
      user => "fluentd"
      password => "c6G1bMZdesCDgGsgaiKq"
      index => "vicinio-%{+YYYY.MM.dd}"
      document_type => "log"
    }
  }
}

It is parsing some log lines but not all, I guess. I do see data coming into Elasticsearch and Kibana.
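As a side note (a sketch, not from the thread): one way to see which lines fail the grok pattern is to replay a few sample lines through a throwaway pipeline with a stdin input and watch for the _grokparsefailure tag. The file name grok_test.conf and the shortened pattern below are placeholders for illustration only.

input { stdin { } }
filter {
  grok {
    # Use the same pattern as the real pipeline; shortened here for readability.
    match => { "message" => "\[%{TIMESTAMP_ISO8601:timestamp_match}\]%{SPACE}\:\|\:%{SPACE}%{WORD:level}%{SPACE}%{GREEDYDATA:msg}" }
  }
}
output { stdout { codec => rubydebug } }

Run it with bin/logstash -f grok_test.conf, paste a line you suspect is not being parsed, and check whether the printed event carries the _grokparsefailure tag.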

Hello Andrew, can you please guide me through the issue?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.