Filebeat can't send logs after Elasticsearch cluster failure

We recently had a problem when our ES cluster failed. The cluster issue was resolved, but Filebeat failed to send any new data after the failure.

Here's a portion of the logs; Filebeat seems to retry forever but never manages to send the data:

    2019-04-08T11:52:04.182+0300    INFO    elasticsearch/client.go:690    Connected to Elasticsearch version 6.4.0
    2019-04-08T11:52:04.185+0300    INFO    template/load.go:73    Template already exists and will not be overwritten.
    2019-04-08T11:52:04.185+0300    INFO    [publish]    pipeline/retry.go:172    retryer: send unwait-signal to consumer
    2019-04-08T11:52:04.185+0300    INFO    [publish]    pipeline/retry.go:174      done
    2019-04-08T11:52:59.058+0300    INFO    [publish]    pipeline/retry.go:149    retryer: send wait signal to consumer
    2019-04-08T11:52:59.058+0300    INFO    [publish]    pipeline/retry.go:151      done
    2019-04-08T11:53:00.065+0300    ERROR    pipeline/output.go:92    Failed to publish events: temporary bulk send failure
    2019-04-08T11:53:00.065+0300    INFO    [publish]    pipeline/retry.go:172    retryer: send unwait-signal to consumer
    2019-04-08T11:53:00.065+0300    INFO    [publish]    pipeline/retry.go:174      done
    2019-04-08T11:53:00.065+0300    INFO    [publish]    pipeline/retry.go:149    retryer: send wait signal to consumer
    2019-04-08T11:53:00.065+0300    INFO    [publish]    pipeline/retry.go:151      done

I restarted the Filebeat service and all data was sent to ES without any problem.
Is this a known issue? The Filebeat version is quite old; should I update?
I'm running Filebeat 6.3.0 as a service on Windows. The Elasticsearch version is 6.4.0.
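
In case it matters, my output configuration is essentially the defaults. A minimal sketch of what I have in filebeat.yml (the host name is a placeholder, and the retry/backoff values shown are just the documented settings at illustrative values, not tuned ones):

```yaml
output.elasticsearch:
  # Placeholder host; the real cluster address differs.
  hosts: ["es-node-1:9200"]
  # Retry/backoff knobs for the Elasticsearch output.
  # These are illustrative values, not what I have tuned.
  max_retries: 3        # retries per batch before events are dropped/requeued
  backoff.init: 1s      # initial wait after a failed connection attempt
  backoff.max: 60s      # upper bound for the exponential backoff
  bulk_max_size: 50     # events per bulk request
  timeout: 90           # HTTP request timeout in seconds
```

Even with this close to the defaults, Filebeat kept looping on the retryer messages above until I restarted the service.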
