I'm getting these errors starting about 20-30 minutes after Filebeat starts, and ongoing from then on:
Sep 20 15:42:39 silver.smartabase.com filebeat[7615]: WARN beater/filebeat.go:368 Filebeat is unable to load the Ingest Node pipeline...warning.
Sep 20 16:00:10 silver.smartabase.com filebeat[7615]: ERROR logstash/async.go:256 Failed to publish events caused by: EOF
Sep 20 16:00:10 silver.smartabase.com filebeat[7615]: ERROR logstash/async.go:256 Failed to publish events caused by: client is not connected
Sep 20 16:00:12 silver.smartabase.com filebeat[7615]: ERROR pipeline/output.go:121 Failed to publish events: client is not connected
Sep 20 16:20:10 silver.smartabase.com filebeat[7615]: ERROR logstash/async.go:256 Failed to publish events caused by: EOF
Sep 20 16:20:10 silver.smartabase.com filebeat[7615]: ERROR logstash/async.go:256 Failed to publish events caused by: client is not connected
Sep 20 16:20:11 silver.smartabase.com filebeat[7615]: ERROR pipeline/output.go:121 Failed to publish events: client is not connected
Sep 20 16:50:10 silver.smartabase.com filebeat[7615]: ERROR logstash/async.go:256 Failed to publish events caused by: EOF
Sep 20 16:50:10 silver.smartabase.com filebeat[7615]: ERROR logstash/async.go:256 Failed to publish events caused by: client is not connected
Sep 20 16:50:12 silver.smartabase.com filebeat[7615]: ERROR pipeline/output.go:121 Failed to publish events: client is not connected
Running the latest v7 on RHEL 7.7:
filebeat version 7.3.2 (amd64), libbeat 7.3.2 [5b046c5a97fe1e312f22d40a1f05365621aad621 built 2019-09-06 13:49:32 +0000 UTC]
Linux silver 3.10.0-1062.1.1.el7.x86_64 #1 SMP Tue Aug 13 18:39:59 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
I've restarted the service a few times over 2 days, and the results are consistent...
Thank you.
Logstash is managed by logz.io, an external provider.
And the logs are arriving fine at the destination, so it's not a connection issue.
The errors persist, however, despite the connection working fine.
Also, a "sister" VM, set up at the same time via the same Ansible automation (with just a few different packages installed for its different use case), does NOT exhibit this behaviour, which is quite puzzling... filebeat.yml is identical on both VMs.
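(For reference, the configured output can also be exercised directly from the Filebeat host with the built-in test commands; a minimal sketch, assuming the default config path /etc/filebeat/filebeat.yml:)

filebeat test config -c /etc/filebeat/filebeat.yml   # validate the configuration
filebeat test output -c /etc/filebeat/filebeat.yml   # attempt a connection to the configured Logstash output

If both report OK while the EOF / "client is not connected" errors keep appearing, it points more at an established connection being closed later on than at Filebeat failing to connect in the first place.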
Can you also check the metrics being written by Filebeat periodically?
If enabled, Filebeat periodically logs its internal metrics that have changed
in the last period. For each metric that changed, the delta from the value at
the beginning of the period is logged. Also, the total values for all non-zero
internal metrics are logged on shutdown. The default is true.

logging.metrics.enabled: true

The period after which to log the internal metrics. The default is 30s.

logging.metrics.period: 30s
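With metrics logging enabled, the periodic snapshots can be pulled out of the journal to see whether events are actually being acked or retried around the time of the errors. A rough sketch, assuming a systemd-managed service and the default 30s period ("Non-zero metrics" is the message Filebeat 7.x uses for these snapshots):

# show the periodic internal-metrics snapshots from the last hour
journalctl -u filebeat --since "1 hour ago" | grep "Non-zero metrics"

The output event counters in those lines (acked / failed / dropped under libbeat.output.events) show whether publishes are succeeding despite the ERROR messages.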