Persistent Queues - invitation for beta testing!

Logstash 5.1 introduced an exciting new feature called Persistent Queues (PQs), which provides two main benefits to Logstash users:

  1. Message durability - protection against in-flight data loss. All incoming events will be persisted to disk, so Logstash will be durable across instance failures.
  2. Simplified ingest architectures - adaptive buffering now handles ingest spikes natively in Logstash without the need for a separate queuing layer. The buffering can be capped by max byte size or number of events. For logging use cases, basic Filebeat/Winlogbeat → Logstash → Elasticsearch ingestion flows can be used with the comfort of durability.

Logstash, by default, uses a fixed-size, in-memory queue between pipeline stages to facilitate dataflow. By enabling PQs, this internal queue becomes both disk-based and variable in length. Although persisting to disk often comes with a performance cost, our initial benchmarks have shown that the performance impact of enabling PQs is essentially negligible for most real-world use cases. This is more great news for our users!
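
To give a rough idea of what enabling this looks like, here is a minimal logstash.yml sketch; the path and size cap below are example values only, and the full set of settings is covered in the Persistent Queues documentation:

    # logstash.yml -- example settings for enabling Persistent Queues (beta)
    queue.type: persisted                  # default is "memory" (the in-memory queue)
    path.queue: /var/lib/logstash/queue    # example path; defaults to a "queue" directory under path.data
    queue.max_bytes: 1gb                   # cap the queue by on-disk size (example value)
    queue.max_events: 0                    # 0 = no event-count cap; set a positive number to cap by events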

Please note that this feature is currently in beta and should be deployed in production at your own risk. Users who try out PQs during the beta cycle and report legitimate bugs or feedback on GitHub will receive a special Elastic gift package as our thank you. We kindly invite everyone to try it out!

For additional details on the feature and how to enable it, please consult the Persistent Queues documentation.


Does that mean that there is still only one event pipeline?

The reason I ask is that the setup you describe (Filebeat -> Logstash -> ES) implies that multiple log types could be sent to Logstash. When Logstash processes those messages concurrently on one pipeline, every message and its contents needs to be checked for its type, which not only costs resources but also results in a large configuration.

Yes, only one pipeline today, though you can still certainly use stream identity and conditionals in Logstash to handle different types of logs from Filebeat.
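
As a rough sketch of what that can look like in a single pipeline (the types, grok patterns, and index naming below are illustrative, not prescriptive):

    input {
      beats {
        port => 5044
      }
    }

    filter {
      # route on the event type set by Filebeat (document_type in Filebeat 5.x)
      if [type] == "apache" {
        grok { match => { "message" => "%{COMBINEDAPACHELOG}" } }
      } else if [type] == "syslog" {
        grok { match => { "message" => "%{SYSLOGLINE}" } }
      }
    }

    output {
      elasticsearch {
        hosts => ["localhost:9200"]
        index => "%{type}-%{+YYYY.MM.dd}"   # example index naming per type
      }
    }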

Multi-pipeline support is on the roadmap: https://github.com/elastic/logstash/issues/6521

ok thanks for the information!

Is there any acknowledgment between Logstash and ES that an event has been received and indexed by ES? How do Persistent Queues work with log files?

@fizwit yes, it acks. Starting with the 5.3 release, Logstash will guarantee at-least-once delivery to Elasticsearch with Persistent Queues (PQs) enabled.

For most logging use cases, you should use Filebeat -> Logstash -> Elasticsearch to get at-least-once delivery guarantees and durability across the delivery chain.
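
For reference, the Filebeat side of that chain is just the standard Logstash output; a minimal filebeat.yml sketch (paths, type, and host below are example values) looks roughly like:

    # filebeat.yml -- example prospector and output for the Filebeat -> Logstash hop
    filebeat.prospectors:
      - input_type: log
        paths:
          - /var/log/*.log        # example path to harvest
        document_type: syslog     # sets [type], usable in Logstash conditionals

    output.logstash:
      hosts: ["logstash.example.com:5044"]   # example host; port matches the beats input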

Hope this helps.
