Disk spooling with Logstash?

We are using the Elastic Stack for our centralized logging. It works great, but we still lack knowledge about Logstash's spooling capabilities when shipping events to Elasticsearch. We run our Elasticsearch instances on different servers than our Logstash shippers, so a network outage can prevent Logstash from shipping an event to Elasticsearch successfully.

I haven't really found information on how Logstash behaves in this situation. Does it spool those events in memory? Or maybe on disk? Is Logstash able to spool at all? As this information is quite important for everyone using the Elastic Stack, I'm really wondering why there is so little information about this topic. So thanks for your help! :slight_smile:

Logstash does not currently spool to disk. Instead, it stops processing, which is often referred to as applying back-pressure, as described here. If you have inputs that cannot handle back-pressure, e.g. TCP- and UDP-based inputs, it is common to introduce a buffering mechanism into the pipeline. This can be something as simple as writing to an intermediate file, but it is often some type of message queue. Persistent queueing in Logstash is under development, but as of Logstash 5.0 it is not yet available.
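As an illustration, here is a minimal sketch of such a buffered pipeline using Redis as the message queue (the hostnames, log path, and list key below are placeholders, not from this thread). One Logstash instance on the shipper pushes events to a Redis list, and a second instance in front of Elasticsearch drains it:

```
# Shipper instance: tail local log files and push events to a Redis list,
# which acts as the buffer while Elasticsearch is unreachable.
input {
  file {
    path => "/var/log/app/*.log"    # placeholder path
  }
}
output {
  redis {
    host => "redis.example.com"     # placeholder broker host
    data_type => "list"
    key => "logstash"               # placeholder list key
  }
}
```

```
# Indexer instance: drain the Redis list and ship to Elasticsearch.
# If Elasticsearch is down, back-pressure stops the redis input and
# events accumulate in the Redis list instead of being lost.
input {
  redis {
    host => "redis.example.com"
    data_type => "list"
    key => "logstash"
  }
}
output {
  elasticsearch {
    hosts => ["es.example.com:9200"]  # placeholder Elasticsearch host
  }
}
```

The nice property of this split is that back-pressure from Elasticsearch only stalls the indexer instance; the shippers keep writing to Redis, which absorbs the backlog up to its capacity.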

This should be helpful: https://www.elastic.co/guide/en/logstash/current/deploying-and-scaling.html
