Failed to publish events caused by: EOF / Failed to publish events caused by: client is not connected / Failed to publish events: client is not connected

Hello everyone,
I have a very simple Filebeat setup that sends a simple log file to Logstash running on another server. After I start Filebeat and Logstash, the Filebeat log shows that the connection is established on the port I explicitly defined (5060). The firewall between the two servers is configured, and telnet shows there is no problem for the two servers to communicate on that port. But I get these errors in my logs:

Aug 13 18:28:36 my_server filebeat[26429]: 2019-08-13T18:28:36.524+0430 INFO pipeline/output.go:95 Connecting to backoff(async(tcp://x.x.x.x:5060))
Aug 13 18:28:36 my_server filebeat[26429]: 2019-08-13T18:28:36.525+0430 INFO pipeline/output.go:105 Connection to backoff(async(tcp://x.x.x.x:5060)) established
...

ERROR logstash/async.go:256 Failed to publish events caused by: EOF
Aug 13 18:29:36 my_server filebeat[26429]: 2019-08-13T18:29:36.599+0430 ERROR logstash/async.go:256 Failed to publish events caused by: client is not connected
Aug 13 18:29:38 my_server filebeat[26429]: 2019-08-13T18:29:38.492+0430 ERROR pipeline/output.go:121 Failed to publish events: client is not connected
Aug 13 18:29:38 my_server filebeat[26429]: 2019-08-13T18:29:38.493+0430 INFO pipeline/output.go:95 Connecting to backoff(async(tcp://x.x.x.x:5060))
Aug 13 18:29:38 my_server filebeat[26429]: 2019-08-13T18:29:38.493+0430 INFO pipeline/output.go:105 Connection to backoff(async(tcp://x.x.x.x:5060)) established
Aug 13 18:29:40 my_server systemd[1]: Stopping Filebeat sends log files to Logstash or directly to Elasticsearch....
Aug 13 18:29:40 my_server filebeat[26429]: 2019-08-13T18:29:40.878+0430 INFO beater/filebeat.go:443 Stopping filebeat
Aug 13 18:29:40 my_server filebeat[26429]: 2019-08-13T18:29:40.878+0430 INFO crawler/crawler.go:139 Stopping Crawler
Aug 13 18:29:40 my_server filebeat[26429]: 2019-08-13T18:29:40.878+0430 INFO crawler/crawler.go:149 Stopping 1 inputs
Aug 13 18:29:40 my_server filebeat[26429]: 2019-08-13T18:29:40.878+0430 INFO cfgfile/reload.go:229 Dynamic config reloader stopped
Aug 13 18:29:40 my_server filebeat[26429]: 2019-08-13T18:29:40.878+0430 INFO input/input.go:149 input ticker stopped
Aug 13 18:29:40 my_server filebeat[26429]: 2019-08-13T18:29:40.878+0430 INFO input/input.go:167 Stopping Input: 6579502740723303628
Aug 13 18:29:40 my_server filebeat[26429]: 2019-08-13T18:29:40.878+0430 INFO log/harvester.go:274 Reader was closed: /etc/filebeat/somelog.log. Closing.
Aug 13 18:29:40 my_server filebeat[26429]: 2019-08-13T18:29:40.878+0430 INFO crawler/crawler.go:165 Crawler stopped
Aug 13 18:29:40 my_server filebeat[26429]: 2019-08-13T18:29:40.878+0430 INFO registrar/registrar.go:367 Stopping Registrar
Aug 13 18:29:40 my_server filebeat[26429]: 2019-08-13T18:29:40.879+0430 INFO registrar/registrar.go:293 Ending Registrar

My Filebeat YAML file on SERVER A:

filebeat.inputs:
- type: log
  paths:
    - /etc/filebeat/somelog.log

output.logstash:
  hosts: ["X.X.X.X:5060"]
  ssl:
    enabled: false
  timeout: 500

My Logstash conf file on SERVER B:

input {
  beats {
    port => "5060"
    ssl => false
  }
}

filter {
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "testing"
  }
}

Please note that I have no SSL cert configured yet, so I tried to disable SSL (I don't know whether it is active by default).

Looks like the connection is closed by Logstash, which force-closes idle connections. See the client_inactivity_timeout setting.
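
For example, you can raise that timeout in the beats input (a sketch; 60 seconds is the default, and 300 below is just an illustrative value):

input {
  beats {
    port => "5060"
    ssl => false
    # Default is 60 (seconds); raising it keeps idle Filebeat
    # connections open longer before Logstash closes them.
    client_inactivity_timeout => 300
  }
}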

The Logstash output uses pipelining and is mostly asynchronous. This is why one sometimes sees follow-up errors.
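
If you prefer publish errors to surface synchronously, the Filebeat Logstash output has a pipelining setting; a minimal sketch (0 disables pipelining, trading some throughput for simpler error reporting):

output.logstash:
  hosts: ["X.X.X.X:5060"]
  # Number of batches sent asynchronously while waiting for ACK.
  # 0 disables pipelining, so errors are reported immediately.
  pipelining: 0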

No events will be lost in Filebeat due to this. Filebeat will just reconnect and continue sending once it is able to do so.
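
The retry interval is controlled by the output's exponential backoff settings; a sketch using the documented defaults:

output.logstash:
  hosts: ["X.X.X.X:5060"]
  # Wait 1s after the first failed connection attempt,
  # doubling up to a maximum of 60s between retries.
  backoff.init: 1s
  backoff.max: 60s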

Thanks a lot for your reply. Actually, the problem turned out to be network-related.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.