When I enable the filebeat.inputs type syslog to receive syslog messages over a socket and forward them to Logstash, I get the following errors in filebeat.log:
2020-01-28T11:03:29.279+0100 ERROR logstash/async.go:256 Failed to publish events caused by: read tcp XXXXXXXXX:57892->XXXXXXX:5044: i/o timeout
2020-01-28T11:03:29.280+0100 ERROR logstash/async.go:256 Failed to publish events caused by: client is not connected
2020-01-28T11:03:30.594+0100 ERROR pipeline/output.go:121 Failed to publish events: client is not connected
2020-01-28T11:03:35.296+0100 ERROR pipeline/output.go:100 Failed to connect to failover(backoff(async(tcp://XXXXXXXXX:5044)),backoff(async(tcp://XXXXXXX:5044))): dial tcp XXXXXXX:5044: connect: connection refused
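For reference, the relevant parts of my configuration look roughly like this (hosts, ports and the protocol are redacted/placeholders; the second Logstash host matches the failover shown in the error above):

```yaml
filebeat.inputs:
  - type: syslog
    # placeholder: I listen on a UDP socket; host/port redacted
    protocol.udp:
      host: "0.0.0.0:514"

output.logstash:
  # two hosts, as in the failover(...) error message above
  hosts: ["XXXXXXXXX:5044", "XXXXXXX:5044"]
```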
As a result, the message appears more than once in Elasticsearch (not always, only sometimes). Other inputs/plugins are not affected; it only happens when I send an event to the configured socket.
What I have already tried to fix it:
- send the output directly to ES: the error is gone, but the fields don't look like I expect them to (funny, since it uses the same ingest pipeline as via Logstash...) -> no solution for me
- set bulk_max_size in filebeat.yml for the Logstash output (as recommended in the thread "Filebeat throwing i/o timeout while sending logs to logstash")
- disabled the filters and the elasticsearch output in Logstash (didn't help) and directed the output to a file (the message appears repeatedly in the file, same as in Elasticsearch).
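The stripped-down Logstash pipeline from the last step looked roughly like this (the output path is a placeholder; the duplicate events showed up in this file too):

```
input {
  beats {
    port => 5044
  }
}
# filters disabled for this test
output {
  file {
    path => "/tmp/beats-debug.log"  # placeholder path
  }
}
```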
Any ideas/hints where to look further?