While reading the blog post about the persistent queue, I found this section unclear:
Application / process level crash / interrupted shutdown.
This is the most common cause of potential data loss that PQ helps
solve. This happens when Logstash crashes with an exception or is killed
at the process level or is generally interrupted in a way which
prevents a safe shutdown. This would typically result in data loss when
using in-memory queuing. With PQ, no data will be lost, regardless of
the durability setting (see the queue.checkpoint.writes setting) and any
unacknowledged data will be replayed upon restarting Logstash.
From what I understand, the queue.checkpoint.writes setting is the one that determines when the buffered persistent queue is safely written to disk.
With the default value (1024), the queue will not be durably on disk until that many events have been written to the input (or until queue.checkpoint.interval elapses).
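To make my question concrete, here is a sketch of the logstash.yml settings I am referring to, as I understand them (values are the documented defaults, shown as an example, not a recommendation):

```yaml
# logstash.yml (example values only)
queue.type: persisted
# Checkpoint (durably write queue state) after this many written events;
# setting it to 1 would checkpoint after every event, trading throughput
# for maximum durability.
queue.checkpoint.writes: 1024
# Also checkpoint at least this often (milliseconds), regardless of
# how many events have been written.
queue.checkpoint.interval: 1000
```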
So whether data is lost on a Logstash crash actually depends on these durability settings, despite the blog saying "no data will be lost, regardless of the durability setting".
Could you confirm this?