Beats does not support these parameters. With potentially a hundred Beats instances sending to Redis, this would actually be more of a soft limit anyway.
Setting limits in number of events is not very accurate. You might be off because you suddenly send loads of stack traces (multiline events) to Redis, taking maybe 10 times more storage than normal events. Or the other way around: you stop sending despite still having loads of memory available.
Also, being a "very soft" limit, the more Beats you're running, the higher the chance you will exceed the limit, up to OOM.
Instead of trying to fix this in a distributed fashion among a potentially huge number of clients by limiting the number of events of unknown size, the server itself should be protected from OOM if possible. See the documented redis.conf: by configuring
maxmemory <bytes> and
maxmemory-policy noeviction, you get much better control over memory usage in Redis. With the
noeviction policy, Redis will respond with errors to writes once the limit is reached. This triggers exponential backoff in Beats on every failed write attempt, until new values can be written to Redis again.
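For illustration, the relevant redis.conf fragment might look like this (the 2gb value is just an example; tune it to your host's available memory):

```
# Cap Redis memory usage at a fixed byte limit.
maxmemory 2gb

# With noeviction, Redis returns errors on writes once the limit
# is reached instead of silently evicting keys, so producers
# (Beats) back off rather than lose data.
maxmemory-policy noeviction
```

The same settings can also be applied at runtime via CONFIG SET without a restart.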