Are logs lost if Elasticsearch is down?

Hello everybody,

I have a question about the fate of logs when Elasticsearch is down. If logs are lost while Elasticsearch is unavailable, how can we work around this problem?

Best regards,

Elasticsearch and the other stack components can be configured so that logs are not lost, but the techniques vary by shipping method.

Filebeat, for example, is pretty easy: if Filebeat can't send events, it waits until it can, so as long as the logs haven't been deleted on the sending host, they can still be sent. Logstash has the option of persistent queues, so it can store events (limited by disk space, of course).
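For reference, switching Logstash to the persistent queue is a small change in logstash.yml. A minimal sketch, assuming defaults otherwise (the size cap and queue path here are illustrative values, not recommendations):

    # logstash.yml -- replace the default in-memory queue with an on-disk one
    queue.type: persisted
    # Upper bound on disk usage; when the queue is full, Logstash applies
    # back pressure to its inputs instead of dropping events
    queue.max_bytes: 4gb
    # Optional: where the queue files live (defaults to a dir under path.data)
    path.queue: /var/lib/logstash/queue

Once this is set, events accepted by an input are written to disk before being processed, so a restart or an Elasticsearch outage does not lose what is already queued.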

You will have to design a persistent log strategy for each specific case. I use Logstash (multiple instances) with persistent queues for syslog-type events, but let Beats wait until they can send, relying on the sending hosts to store the events.
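On the Beats side there is nothing extra to configure for this behavior: Filebeat records its read position per file in a registry and only advances it after the output acknowledges the events, so an outage just pauses the harvesting. A minimal filebeat.yml sketch, assuming a Filebeat version with the filestream input (the id, paths, and host are hypothetical placeholders):

    # filebeat.yml -- Filebeat retries until the output accepts the events;
    # its registry keeps the read offset, so nothing is skipped on resume
    filebeat.inputs:
      - type: filestream
        id: app-logs                # hypothetical input id
        paths:
          - /var/log/app/*.log      # hypothetical path

    output.elasticsearch:
      hosts: ["https://es01:9200"]  # hypothetical Elasticsearch host

The one limit, as noted above, is on the sending host itself: if log rotation deletes a file before Filebeat has shipped it, those events are gone.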


Hello rugenl,
Thanks for replying.

After a quick search on the official Elastic website, I found the limitation below in the definition of persistent queues:

  • Input plugins that do not use a request-response protocol cannot be protected from data loss. For example: tcp, udp, zeromq push+pull, and many other inputs do not have a mechanism to acknowledge receipt to the sender. Plugins such as beats and http, which do have an acknowledgement capability, are well protected by this queue.

What I understood is that in the case of using syslog, for example, to send logs, this method will not be helpful.

Best regards

We direct syslog to a network load balancer backed by multi-site, multi-server Logstash with persistent queues. A lot of it is sent over UDP, so it doesn't have assured delivery anyway.
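If each Logstash node runs several pipelines, the persistent queue can also be enabled per pipeline in pipelines.yml instead of globally. A sketch of a dedicated syslog pipeline, with a hypothetical id, config path, and size (this only protects events after Logstash has received them; anything UDP drops on the wire never reaches the queue):

    # pipelines.yml -- per-pipeline persistent queue for the syslog path
    - pipeline.id: syslog                              # hypothetical pipeline id
      path.config: "/etc/logstash/conf.d/syslog.conf"  # hypothetical config path
      queue.type: persisted
      queue.max_bytes: 8gb                             # illustrative cap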
