I have just seen the new output options for Beats. I would like to know whether two parameters will be added for the Redis output in the future; they are:
From my point of view, this would be useful because it is a good way to monitor Redis and avoid bringing it down.
Thanks in advance,
Beats does not support these parameters. With potentially a hundred Beats sending to Redis, this would actually be more of a soft limit.
Setting limits in terms of number of events is not very accurate. You might be off because you suddenly send loads of stack traces (multiline events) to Redis, taking maybe 10 times more storage than normal events. Or the other way around: you stop sending despite still having loads of memory available.
Also, being a "very soft" limit, the more Beats you are running, the higher the chance you will exceed the limit, up to the point of OOM.
Instead of trying to fix this in a distributed fashion among a potentially huge number of clients by limiting the number of events of unknown size, the server itself should be protected from OOM if possible. See the documented redis.conf. By configuring maxmemory <bytes> and maxmemory-policy noeviction, you get much better control over memory usage in Redis. With the noeviction policy, Redis responds to writes with errors once the limit is reached. This triggers exponential backoff in Beats on every write attempt, until new values can be written to Redis again.
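For reference, the relevant redis.conf settings look like this (the 512mb value is just a placeholder; pick a limit that fits your host):

```
# Cap Redis memory usage at a fixed budget.
maxmemory 512mb

# When the cap is reached, reject writes with an error
# instead of evicting existing keys.
maxmemory-policy noeviction
```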
Thanks very much for your answer. One question: will Filebeat (or any other Beat) drop the messages, or will it wait?
Thanks in advance,
It depends on the beat. Beats can pass flags to the internal publisher pipeline indicating whether messages may be dropped or not. Filebeat and Winlogbeat ensure no data is dropped and will attempt to resend or block (with the corner case that very old, not-yet-shipped logs may be deleted from disk). Metricbeat and Packetbeat, on the other hand, might start dropping data.
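The retry-vs-drop behaviour described above can be sketched roughly as follows. This is not the actual Beats implementation (Beats is written in Go and the real pipeline is far more involved); it is just a small illustrative Python model where `guaranteed=True` mimics Filebeat/Winlogbeat (keep retrying with exponential backoff) and `guaranteed=False` mimics Metricbeat/Packetbeat (drop after a few attempts). The names `publish`, `send`, and the parameters are all made up for the example:

```python
import time

def publish(send, event, guaranteed, max_retries=5, base=0.05, sleep=time.sleep):
    """Try to deliver `event` via `send`, backing off exponentially on error.

    guaranteed=True  -> retry indefinitely (Filebeat/Winlogbeat-style).
    guaranteed=False -> give up and drop after max_retries attempts
                        (Metricbeat/Packetbeat-style).
    Returns True if delivered, False if the event was dropped.
    """
    delay = base
    attempt = 0
    while True:
        try:
            send(event)          # e.g. a write to Redis that may raise on OOM
            return True
        except IOError:
            attempt += 1
            if not guaranteed and attempt >= max_retries:
                return False     # event dropped
            sleep(delay)
            delay = min(delay * 2, 5.0)  # exponential backoff, capped
```

With a guaranteed source, a temporary Redis OOM error just slows delivery down; with a non-guaranteed source, the same error eventually costs you events.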
We're thinking about adding some on-disk queuing to Beats, to buffer events in case outputs become unresponsive, or for the Filebeat use case of files with very fast turn-around. But in general, when using queue systems, the read rate must be >= the write rate over any given time period, with the write rate exceeding the read rate only for as long as buffering can absorb the difference. Otherwise you will always run into problems. That is, queues should operate in a mostly-empty state, not a mostly-full one.
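The rate argument above is easy to see with a back-of-the-envelope simulation. This toy model (all numbers are invented for illustration) steps a queue one second at a time: whenever the write rate exceeds the read rate, the depth grows linearly until the buffer is full, no matter how large the buffer is.

```python
def queue_depth(write_rate, read_rate, seconds, capacity):
    """Simulate per-second queue depth; returns the depth after each second."""
    depth, series = 0, []
    for _ in range(seconds):
        # Net change per second, clamped to [0, capacity].
        depth = min(capacity, max(0, depth + write_rate - read_rate))
        series.append(depth)
    return series
```

For example, with writers producing 10,000 events/s against a consumer draining 8,000 events/s, even a 100,000-event buffer fills within a minute, after which the producers must block or drop.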
This topic was automatically closed after 21 days. New replies are no longer allowed.