I am getting a similar kind of error. I am trying to monitor logs from different hosts using Filebeat, and on some hosts I get this error:
2016/10/26 17:28:48.159067 single.go:140: ERR Connecting error publishing events (retrying): read tcp 10.0.1.151:41256->54.214.224.161:5044: i/o timeout
2016/10/26 17:29:17.922310 logp.go:230: INFO Non-zero metrics in the last 30s: libbeat.logstash.publish.write_bytes=132 libbeat.logstash.publish.read_errors=1
This is only happening on some hosts; other hosts running Filebeat are pushing logs to Logstash without a problem. I have already checked connectivity from the failing hosts and it is fine.
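For reference, this is roughly the check I ran from one of the failing hosts (the address and port are the ones from my setup below; nc is just one way to test reachability, assuming it is installed):

# From a failing host (e.g. 10.0.1.151), confirm the Logstash beats port is reachable.
# -v = verbose, -z = only connect (no data), -w 5 = give up after 5 seconds.
nc -vz -w 5 54.214.224.161 5044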
My filebeat.yml is as follows:
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/vdebug
    - /var/log/auth.log
    - /var/log/kern.log
    - /var/log/vsyslog
    - /var/log/nms/vmanage-server.log
  document_type: log

output.logstash:
  # The Logstash hosts
  hosts: ["54.214.224.161:5044"]
  bulk_max_size: 1024
  ssl:
    verification_mode: none
The Logstash conf file on the server is as follows:
ester@elk:/etc/logstash$ more syslog-elasticsearch.conf
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
Any help would be appreciated.
Thanks