Filebeat vs. Logstash handling when the Elasticsearch endpoint is down

Quick question on how both Filebeat and Logstash handle data if Elasticsearch is down.

In this scenario, both would be pulling from a Kafka topic. We are wondering how the two behave if the endpoint is down but data is still being produced to the Kafka topic for them to read. Is the data cached until the Elastic cluster is back up? If so, how is the cache managed? Or do they simply stop reading from the topic until the Elastic cluster is back up?


For Logstash, that is correct: when the Elasticsearch output cannot deliver events, back pressure propagates upstream through the pipeline, so the Kafka input stops consuming and the data simply stays in Kafka (consumer offsets are not committed past what has been processed). In-memory queueing and back pressure are covered here. I cannot speak to Filebeat.
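For illustration, a minimal pipeline of this shape might look like the sketch below. The broker address, topic, group ID, and Elasticsearch host are placeholders, not values from this thread:

```conf
# Minimal Logstash pipeline sketch: Kafka in, Elasticsearch out.
# If the elasticsearch output cannot deliver, back pressure stalls
# the kafka input, so unread events remain safely in the Kafka topic.
input {
  kafka {
    bootstrap_servers => "kafka:9092"   # placeholder broker
    topics => ["logs"]                  # placeholder topic
    group_id => "logstash-consumers"    # placeholder consumer group
  }
}
output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]  # placeholder endpoint
  }
}
```

If you want Logstash itself to buffer events to disk during an outage rather than relying purely on Kafka retention, you can enable the persistent queue in `logstash.yml` (e.g. `queue.type: persisted` with a `queue.max_bytes` limit); once that queue fills, the same back pressure behavior applies.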

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.