Filebeat cannot publish events

Howdy,

I'm trying the ELK stack under CentOS 6 machines, following https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elk-stack-on-centos-7.

Each service runs on a separate machine: ela01 has Elasticsearch, log has Logstash, and filebeat has… you know, the Filebeat service :smile:

The problem I'm facing is that Filebeat gives me the error below:
2015-12-16T12:21:58+01:00 INFO backoff retry: 4s
2015-12-16T12:22:02+01:00 INFO Error publishing events (retrying): EOF
2015-12-16T12:22:02+01:00 INFO Error publishing events (retrying): read tcp 192.168.28.162:51149->192.168.28.163:5044: read: connection reset by peer
2015-12-16T12:22:02+01:00 INFO send fail

My config looks like below:

Logstash .conf file:

input {
  beats {
    host => "log"
    port => 5044
    type => "logs"
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }    
}

…

filebeat.yml:

filebeat:
  prospectors:
    -
      paths:
        - /input/*.log
      input_type: log
  registry_file: /var/lib/filebeat/registry

output:
  elasticsearch:
    hosts: ["ela01:9200"]
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

  logstash:
    hosts: ["log:5044"]

shipper:
  geoip:
    paths:
      - "/usr/share/GeoIP/GeoLiteCity.dat"

logging:
  to_files: true
  files:
    path: /var/log/mybeat
    name: mybeat
  level: info

From the Filebeat server I can connect to Logstash:

# nc -vz log 5044
Connection to 192.168.28.163 5044 port [tcp/lxi-evntsvc] succeeded!

Any clue about what more I could check?

Commenting out the certificate-related lines gives the same error message.

Thanks so much!

You've configured the listener on the Logstash side to use TLS but you're not configuring Filebeat to use TLS when connecting to Logstash.
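
For example, a minimal sketch that reuses the CA file already referenced under your elasticsearch output (in Filebeat 1.x the option block under the output is called tls):

output:
  logstash:
    hosts: ["log:5044"]
    # the CA that signed the Logstash certificate
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]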

Hi Magnus,

you're right.

Thanks, Elasticsearch is indexing now :smile:

In case this is not intentional: I noticed that the configuration has both an elasticsearch output and a logstash output.

If you are intending for events to go from Filebeat -> Logstash -> Elasticsearch, then you can remove the elasticsearch section of the configuration and only send events to Logstash. An elasticsearch output will need to be added to your Logstash config. There is an example in the Getting Started guide.
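
Something along these lines (a minimal sketch based on the Getting Started example, pointing at the ela01 host from your Filebeat config; the index pattern is the conventional one for beats):

output {
  elasticsearch {
    hosts => ["ela01:9200"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}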

That makes sense to me. In fact, my plan was to tweak the config, and you helped me do it.
Thanks for pointing it out.

Hi,

I am having a similar issue, and while going through this thread I noticed that your "Getting Started" link (which might help me too) doesn't work.

Regards,
Iqbal

Here is the most recent one: https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html

Can you please share your filebeat.yml? I have a similar issue (i/o timeout when connecting) with the configuration to use TLS.

input {
  beats {
    port => 5044
    type => "syslog"
    #ssl => true
    #ssl_certificate => "/etc/pki/tls/certs/filebeat.crt"
    #ssl_key => "/etc/pki/tls/private/filebeat.key"
  }
}

2016-10-25T15:50:12Z INFO Total non-zero values: libbeat.logstash.published_and_acked_events=3816 filebeat.harvester.closed=7 filebeat.harvester.started=7 libbeat.publisher.published_events=4089 registrar.states.update=2048 registrar.writes=2 libbeat.logstash.call_count.PublishEvents=29 libbeat.logstash.published_but_not_acked_events=14443 libbeat.logstash.publish.read_bytes=108 publish.events=2048 registar.states.current=7 libbeat.logstash.publish.read_errors=27 libbeat.logstash.publish.write_bytes=194818
2016-10-25T15:50:12Z INFO Uptime: 32m20.917862941s
2016-10-25T15:50:12Z INFO filebeat stopped.

filebeat:
  prospectors:
    -
      paths:
        - /path/to/whatever.log
      input_type: log
      document_type: whatever

[…]
   
  registry_file: /var/lib/filebeat/registry

output:
  logstash:
    hosts: ["logstash:5044"]

shipper:
  tags: []

logging:
  to_files: true
  files:
    path: /var/log/filebeat
    name: filebeat
    level: error

Thanks for getting back to me. I'm running into this error again, not sure why [..]

service filebeat start
Starting filebeat: Exiting: error loading config file: yaml: line 21: could not find expected ':'

filebeat:
  prospectors:
    -
      paths:
        - /var/.log
        - /var/log/.log
        - /var/log/messages
        - /var/log
        - /var/.log
        - /opt/.log
        - /opt.log
        - /opt.log
        - /opt/.log
      input_type: log
      document_type: syslog

[…]

  registry_file: /var/lib/filebeat/registry

output:
  logstash:
    hosts: ["10.251.33.130:5044"]

shipper:
  tags: []

logging:
  to_files: true
  files:
    path: /var/log/filebeat
    name: filebeat
    level: error

It looks like a syntax error. Please check it.
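
That "could not find expected ':'" message usually means the YAML indentation is off somewhere (the line number refers to your file on disk, not to the paste above). For comparison, a minimal sketch of how that section should nest, using one of your paths:

filebeat:
  prospectors:
    -
      paths:
        - /var/log/messages
      input_type: log
      document_type: syslog
  registry_file: /var/lib/filebeat/registry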

PS: I'd suggest creating a new thread instead of reviving one that is almost a year old.

I am getting a similar kind of error. I am trying to monitor logs from different hosts using Filebeat.

I get this error on some hosts
2016/10/26 17:28:48.159067 single.go:140: ERR Connecting error publishing events (retrying): read tcp 10.0.1.151:41256->54.214.224.161:5044: i/o timeout
2016/10/26 17:29:17.922310 logp.go:230: INFO Non-zero metrics in the last 30s: libbeat.logstash.publish.write_bytes=132 libbeat.logstash.publish.read_errors=1

This is happening only on some hosts; other hosts running Filebeat are pushing logs to Logstash just fine.
I have already checked connectivity, and that is fine.

My filebeat.yml is as follows:
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/vdebug
    - /var/log/auth.log
    - /var/log/kern.log
    - /var/log/vsyslog
    - /var/log/nms/vmanage-server.log
  document_type: log

output.logstash:
  # The Logstash hosts
  hosts: ["54.214.224.161:5044"]
  bulk_max_size: 1024
  ssl:
    verification_mode: none

The Logstash conf file is as follows:

ester@elk:/etc/logstash$ more syslog-elasticsearch.conf
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

Any help would be appreciated.

Thanks

@admin1, please start a new thread for your unrelated problem.