What happens if output (redis, kafka, logstash, ...) is not available?

Hi,

Currently I am using Metricbeat with the file output. Filebeat then tails that log and delivers it to Redis.

I know that Metricbeat has a Redis output, just like Filebeat.

So I would like to understand what happens to the system, the events, and resource usage if I use the Redis output directly in Metricbeat.

If Redis is available, I do not expect any issues.
But what happens if the output is down?

When logging to a file, I expect Filebeat to just tail a few lines and stop reading while the backend is unreachable, so there is not much pressure on the system. Metricbeat writes to rotating files, so I only lose data if the backend is down for too long.
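
For reference, my current setup looks roughly like this (a sketch; the paths and host are placeholders, the rotation values are the documented defaults):

```yaml
# metricbeat.yml -- write events to rotating files on disk
output.file:
  path: "/var/log/metricbeat"     # placeholder directory
  filename: "metricbeat.events"
  rotate_every_kb: 10240          # rotate after ~10 MB (default)
  number_of_files: 7              # keep at most 7 files (default)

# filebeat.yml -- tail those files and ship them to Redis
filebeat.inputs:
  - type: log
    paths:
      - "/var/log/metricbeat/metricbeat.events*"

output.redis:
  hosts: ["localhost:6379"]       # placeholder host
  key: "metricbeat"
```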

But what happens with Metricbeat and its event generation if the backend is down and Metricbeat is configured to push to the backend directly?

  • How many events will be stored?
  • Will they be stored in memory or on disk?
  • Will they be lost on a Metricbeat restart?
  • If stored in memory, is there a limit, or can the host run out of memory because of Metricbeat?
  • If there is a limit (memory, event count, etc.) and it is reached, will it behave like a ring buffer, overwriting the oldest events or blocks of events, or will Metricbeat just stop generating events, or even crash?

Thanks a lot,
Andreas

Much of the logic you're asking about is configurable. Beats uses an internal queue to buffer events before sending them to the output.
Here's the documentation on the queue: https://www.elastic.co/guide/en/beats/metricbeat/current/configuring-internal-queue.html
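
For example, the memory queue (the default queue type) is bounded like this; the values below are the documented defaults, just spelled out explicitly:

```yaml
# metricbeat.yml -- internal memory queue (the default)
queue.mem:
  events: 4096             # max events buffered in memory; producers block when full
  flush.min_events: 2048   # forward batches of at least this many events...
  flush.timeout: 1s        # ...or whatever has accumulated after this timeout
```

Since this queue lives in memory, anything still buffered is lost if the beat restarts.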

Most of the outputs also have a max_retries setting that determines when an event is dropped if it can't be sent. By default, Filebeat ignores this setting and retries indefinitely.

Here's the redis output documentation on max_retries: https://www.elastic.co/guide/en/beats/metricbeat/7.3/redis-output.html#_literal_max_retries_literal_4
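
A minimal sketch of that setting in context (host and key are placeholders; 3 is the documented default):

```yaml
# metricbeat.yml -- Redis output with bounded retries
output.redis:
  hosts: ["localhost:6379"]  # placeholder host
  key: "metricbeat"
  max_retries: 3             # drop a batch after 3 failed publish attempts
```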

Thanks for your reply.

As I understand the memory queue and the spool queue, they block and do not accept new events once they are full.

The file spool queue stores all events in an on-disk ring buffer. The spool has a write buffer that new events are written to. Events written to the spool are forwarded to the outputs only after the write buffer has been flushed successfully.

The spool waits for the output to acknowledge or drop events. If the spool is full, no new events can be inserted; the spool will block. Space is freed only after a signal from the output has been received.

How does that fit with a ring buffer? My understanding of a ring buffer is that when it gets full, the oldest events (or a block of the x oldest events) are deleted and overwritten by new events. That is how log4j works with its rolling file appender, for example.
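
For context, this is roughly how I would enable the file spool (a sketch based on the 7.3 docs; the path and sizes are the documented example values):

```yaml
# metricbeat.yml -- file spool queue (beta in 7.3)
queue.spool:
  file:
    path: "${path.data}/spool.dat"  # on-disk ring file
    size: 512MiB                    # total spool size; when full, inputs block
    page_size: 16KiB
  write:
    buffer_size: 10MiB              # write buffer; flushed before events reach the output
    flush.timeout: 5s
    flush.events: 1024
```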

Asp,

Metricbeat and Filebeat are both designed to guard against data loss. This is why the file spool queue doesn't overwrite items in the buffer like a 'traditional' ring buffer would. Depending on the beat and how you configure it, it is possible for the beat to "block" if it can't send events and its buffers are full. On Metricbeat, you do have the max_retries setting you can use to drop events. Filebeat, however, is much stricter about data loss and will not drop events.
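
Putting the two knobs together, a minimal sketch that answers your original bullets (the values are examples, not recommendations): the queue size caps memory use, so Metricbeat blocks rather than exhausting the host's memory, and max_retries lets it drop events instead of retrying forever:

```yaml
# metricbeat.yml -- bound both memory use and retry behavior
queue.mem:
  events: 4096               # hard cap on buffered events; no unbounded memory growth
output.redis:
  hosts: ["localhost:6379"]  # placeholder host
  key: "metricbeat"
  max_retries: 3             # give up on a batch after 3 attempts instead of blocking forever
```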
