Filebeat EOF - And then succeeds at second try

(Christian Nørskov) #1


I'm having an issue with Filebeat and an error message. (Almost) every time one of my Filebeat instances tries to send data to Logstash, the following logs are written. It looks like the first attempt to send to Logstash fails, but the second attempt always succeeds.

2016/11/04 13:57:16.671410 single.go:76: INFO Error publishing events (retrying): EOF
2016/11/04 13:57:16.671437 single.go:152: INFO send fail
2016/11/04 13:57:16.671448 single.go:159: INFO backoff retry: 1s
2016/11/04 13:57:17.698679 publish.go:104: INFO Events sent: 5
2016/11/04 13:57:17.698869 registrar.go:163: INFO Registry file updated. 1 states written.

I've set the congestion_threshold in Logstash to 60 seconds to see if this was the problem, but it didn't help. Neither Logstash nor Filebeat is under any heavy load (about 300 KB of logs are created every hour in this test setup).

Notes: the applications are deployed in Docker containers in a Rancher environment, where Filebeat runs as a sidekick to each application, reading that application's logs. Logstash is also deployed in a container, but Kafka runs directly on the hosts (and is accessible to Logstash).

Of course this isn't the biggest issue, since the logs are successfully sent, but I would rather they were sent on the first try, so the Filebeat logs don't fill up with messages that could overshadow actual problems.

I hope you guys can help.



Logstash config:

input {
  beats {
    port => 5044
    congestion_threshold => 60
  }
}

output {
  kafka {
    client_id => "client_id"
    topic_id => "kafka_topic"
    bootstrap_servers => "kafka_servers"
    codec => plain {
      format => "%{source} => %{message}"
    }
  }
}

Filebeat config:


filebeat:
  # List of prospectors to fetch data.
  # Each - is a prospector. Below are the prospector specific configurations
  prospectors:
    - paths:
        - "${LOG_PATTERN:/var/log/**/*.log}"

logging:
  to_files: true
  level: INFO
  files:
    name: filebeat.log
    path: /tmp
    keepfiles: 1
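
The config above only shows the prospector and logging settings; the output section is not included in the post. For completeness, a minimal output section for a setup like this (Filebeat 1.x syntax) might look as follows. This is a sketch, not the poster's actual config, and the "logstash" hostname is an assumption for illustration:

# Hypothetical output section (not from the original post); the
# "logstash" hostname is an assumption.
output:
  logstash:
    hosts: ["logstash:5044"]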

I should also note that there is nothing in the Logstash logs besides:

Settings: Default pipeline workers: 2
Pipeline main started

(Steffen Siering) #2

Which Filebeat/Logstash versions are you using? Did you consider upgrading to 5.0? (congestion_threshold has been replaced in newer versions of the beats input plugin in Logstash.)

Anything in the Logstash logs? EOF means the TCP connection Filebeat is using has been closed by the remote end.
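
If you do upgrade, the beats input simply drops the deprecated option. A minimal sketch, reusing the port from the config in the original post:

input {
  beats {
    # congestion_threshold was removed from newer versions of the
    # beats input plugin; only the port is required here.
    port => 5044
  }
}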

(system) #3

This topic was automatically closed after 21 days. New replies are no longer allowed.