Logstash in Docker and Elastic down

Hello,

I'm running Logstash in Docker. Sometimes Elasticsearch becomes unavailable, and when I then restart the Logstash Docker container, Logstash drops the messages and they are never saved to Elasticsearch.

I would like to ask where Logstash stores the logs that failed to be sent to Elasticsearch (while Elasticsearch is unavailable).

I would like Logstash to keep those logs even after the Logstash container is restarted. When I set up a dead letter queue, the messages were not saved there.

What do I need to configure, please? :slight_smile:

Thank you,
Katerina

By default, Logstash uses an in-memory queue, so if you restart Logstash it will lose the messages.

You need to use persistent queues. To use them with a Docker container you will also need a volume that persists across container restarts, so the on-disk queue survives when the container is recreated.
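As a minimal sketch (the settings are the standard Logstash ones, the paths are the defaults of the official Docker image, and the image tag is just an example), you would enable the persistent queue in `logstash.yml` and mount the data directory as a volume:

```yaml
# logstash.yml — switch from the in-memory queue to the on-disk queue
queue.type: persisted
# optional: cap the on-disk queue size (default is 1024mb)
queue.max_bytes: 4gb
# optional: queue location; defaults to path.data/queue,
# which is /usr/share/logstash/data/queue in the official image
# path.queue: /usr/share/logstash/data/queue
```

```yaml
# docker-compose.yml — keep Logstash's data directory across container restarts
services:
  logstash:
    image: docker.elastic.co/logstash/logstash:8.13.4   # example tag, use your version
    volumes:
      - ./logstash.yml:/usr/share/logstash/config/logstash.yml:ro
      - logstash-data:/usr/share/logstash/data           # queue files live under this path
volumes:
  logstash-data:
```

With this in place, events that cannot be delivered while Elasticsearch is down stay in the queue on the volume and are retried after the container comes back up.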

This is correct. The dead letter queue only works when the Elasticsearch output gets a 400 or 404 response from Elasticsearch; if Elasticsearch is down, the dead letter queue won't intercept the message.

The documentation has this information:

HTTP request failure. If the HTTP request fails (because Elasticsearch is unreachable or because it returned an HTTP error code), the Elasticsearch output retries the entire request indefinitely. In these scenarios, the dead letter queue has no opportunity to intercept.
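For reference, the dead letter queue also has to be enabled explicitly if you want to capture documents that Elasticsearch rejects with 400/404 (for example, mapping errors). A minimal sketch, again assuming the default data path of the official Docker image:

```yaml
# logstash.yml — enable the dead letter queue for the elasticsearch output
dead_letter_queue.enable: true
# optional: DLQ location; defaults to path.data/dead_letter_queue
# path.dead_letter_queue: /usr/share/logstash/data/dead_letter_queue
```

But for your case (Elasticsearch unreachable), the persistent queue is the right mechanism, since the output retries the request indefinitely and the dead letter queue never sees the event.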

Hello,

I didn't know that. I set up the persistent queue and it works now. :slightly_smiling_face:

Thank you,
Katerina
