Persistent queue not fully emptied after ES output was unavailable

Hi,

I have a setup with a JMS input which subscribes to a durable topic on a message broker.
Events contain log data in JSON format, so in my filters I have a json, a split and a bit of ruby code.
The single output is ES.
At the moment I'm running a test where I push 5K messages to the broker; each message contains 50 log lines, so for a JMS input of 5K messages I'm outputting 250K documents to ES.
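For reference, the pipeline is roughly shaped like this (a simplified sketch; the yaml_file path, destination, field names, hosts and index are placeholders rather than my exact values, and the broker/durable-subscription specifics are omitted):

```
input {
  jms {
    # broker connection details live in a separate YAML file (placeholder path/section);
    # durable-subscription and broker-specific options are omitted here
    yaml_file    => "/etc/logstash/jms.yml"
    yaml_section => "broker"
    destination  => "logs.topic"
    pub_sub      => true
  }
}

filter {
  # the JMS message body is a JSON document holding an array of log lines
  json {
    source => "message"
  }
  # one event per log line (50 per JMS message in my test)
  split {
    field => "lines"
  }
  # stand-in for my real ruby code
  ruby {
    code => "event.set('processed', true)"
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "logs-%{+YYYY.MM.dd}"
  }
}
```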

As the log data is expected to be kept for long-term use, I activated queue persistence to avoid data loss when ES is down or unavailable.
Starting from a situation where everything is up but there is no broker activity, I shut down ES, let Logstash complain a bit about that, then push my 5K messages to the broker.
I can clearly see the pages being created on disk and the messages disappearing from the broker's topic, and when I start ES again, documents are correctly indexed... but if there is no other activity on the broker, there are always some messages stuck in the persisted queue.

I know they are there because:

  • I don't reach the full 250K indexed documents
  • the queue page size on disk is a few MB
  • if I restart Logstash, the queue ends up being drained (after the restart and not during the stop, since queue.drain is still set to false, I suppose)

What's even more puzzling is that even without forcefully interrupting Elasticsearch, on a "clean state" pipeline, if I push another batch of 5K JMS messages (reminder: it's supposed to generate 5K*50 = 250K documents), at most 40K documents get indexed.
Which means at most 800 JMS messages have been fully acked and processed, and the rest is stuck in the ~50 MB queue on disk.

Am I missing something?
Have I totally misunderstood what the persistent queue is about?

PS: I haven't changed any pipeline or queue settings except setting queue.type to persisted.
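For completeness, the only non-default queue setting in my logstash.yml looks roughly like this (the commented lines are the documented defaults, shown only for reference):

```yaml
# logstash.yml (only change from defaults)
queue.type: persisted

# left at their defaults, listed only for reference:
# queue.max_bytes: 1024mb
# queue.drain: false    # drains the queue on shutdown only when set to true
```

As I understand it, queue.drain: true would only force a drain during shutdown, so it wouldn't explain why events stay stuck while the pipeline keeps running.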

Regards,
R.
