Buffering logs using an ingest node

Hi,
I need to build an Elastic SIEM cluster in which logs are buffered if a data node fails. When the data node comes back online, the logs from the period when the node was down should be pulled into the cluster. Can something like this be achieved with an ingest node? Logs will be collected both with the Elastic Agent and sent via syslog to Logstash, which will parse them. How can I implement this buffer? Thank you in advance for your response.

No, it cannot. An ingest node is an Elasticsearch node that is part of the cluster: it receives requests, processes them, and forwards them to the data nodes. It cannot buffer anything. If the data cannot be indexed because a data node is offline, the data will be lost.

What you need is a message queue like Kafka: write your logs into Kafka using Logstash, Filebeat, or any other tool, and then consume them from Kafka using Logstash, Filebeat, or another tool that indexes them into Elasticsearch. If the data nodes go down, the consumer simply stops reading and the events remain in Kafka until the cluster is healthy again.
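As a rough sketch, this could look like a pair of Logstash pipelines, one shipping into Kafka and one consuming from it (the broker address, port, and topic name here are example values, not anything from your setup):

```conf
# Shipper pipeline: receive syslog and write the events to a Kafka topic.
input {
  syslog { port => 5514 }
}
output {
  kafka {
    bootstrap_servers => "kafka1:9092"   # example broker
    topic_id          => "siem-logs"     # example topic
    codec             => json
  }
}
```

```conf
# Indexer pipeline: consume from Kafka, parse, and index into Elasticsearch.
# If a data node is offline and indexing fails, Logstash stops consuming
# and the events stay in Kafka until the cluster recovers.
input {
  kafka {
    bootstrap_servers => "kafka1:9092"
    topics            => ["siem-logs"]
    codec             => json
  }
}
filter {
  # your parsing (grok, dissect, etc.) goes here
}
output {
  elasticsearch {
    hosts => ["https://es-node:9200"]    # example host
  }
}
```

The key point is that Kafka retains the messages for its configured retention period, so a temporary outage on the Elasticsearch side does not lose data as long as it is shorter than that retention.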

The Elastic Agent is pretty limited, and I'm not sure whether it supports Kafka as an input or output, so you may run into issues if you need this kind of buffer with the Elastic Agent. With Logstash or Filebeat, however, it is pretty easy to use Kafka to buffer messages.
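For the Filebeat side, pointing it at Kafka instead of Elasticsearch is a small config change (again, broker and topic names below are placeholders):

```yaml
# filebeat.yml - send events to Kafka instead of directly to Elasticsearch
output.kafka:
  hosts: ["kafka1:9092"]   # example broker
  topic: "siem-logs"       # example topic
```

From there the same consumer pipeline reads the topic and indexes into the cluster, so both the syslog path and the Filebeat path share one buffer.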