Filebeat to Redis

I already know that Filebeat (the LS forwarder) wasn't designed to talk to Redis, only to Logstash. However, I have an HIDS (host intrusion detection system), OSSEC, which currently sends its logs to Logstash over UDP (OSSEC supports UDP only as an output), and that causes us to lose lots of events. After scaling Logstash and Elasticsearch, I put a broker (2 Redis instances) in front to queue the huge volume of events, and the LS server then pops them off. My question is: is there any way to make my HIDS communicate with Redis directly, or is there a tool that can bind to an ipaddr:port over UDP and forward the datagrams to Redis?

Note:
I'm currently running a Logstash instance as a client on my HIDS server, with no filters, just outputting the events to Redis.
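For reference, the shipper instance on the HIDS server looks roughly like this (a minimal sketch; the port, Redis host names and key are placeholders for my setup):

```
input {
  udp {
    port => 10514                           # port OSSEC's syslog_output points at (placeholder)
  }
}
output {
  redis {
    host      => ["redis-01", "redis-02"]   # the two broker instances (placeholder names)
    data_type => "list"                     # push events onto a Redis list
    key       => "ossec"                    # list key the indexing LS instance pops from
  }
}
```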
Is it an ideal solution?
Thanks

is there a tool that can bind to an ipaddr:port over UDP and forward the datagrams to Redis?

Is there anything wrong with Logstash?


I'm currently using Logstash as a shipper to Redis, but I believe Logstash has a limited internal queue (20 events) and a single thread per operation. I have a huge number of events coming into Logstash, which pushes them on to Redis, and that makes me a little afraid of losing events. Is there a directive to increase the number of threads on the LS input? And if so, how can I guarantee that my events are not being lost?
Thanks :slightly_smiling:

The udp input has an option to increase the number of threads used, plus the Logstash pipeline itself is threaded (configurable), plus the kernel has buffers for UDP datagrams. I'd look into those options first. When I'm worried about losing UDP datagrams I don't rely on any external system like a broker; I stream the messages to disk and use Filebeat or another Logstash instance to read and ship those files.
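As a rough sketch of that second approach (assuming a listener on port 10514 and a local spool directory; the paths and worker count are just examples, and the pipeline worker count is set separately, e.g. with Logstash's -w startup flag), the receiving instance would look something like this, with Filebeat or another Logstash instance then tailing the files it writes:

```
input {
  udp {
    port        => 10514     # where OSSEC's syslog_output sends
    workers     => 4         # more threads reading datagrams off the socket
    buffer_size => 65536     # maximum datagram size to accept
  }
}
output {
  file {
    # time-based file names keep any single file from growing without bound
    path => "/var/spool/ossec/ossec-%{+YYYY-MM-dd-HH}.log"
  }
}
```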


Thanks for the nice idea. Since I'm receiving lots of events, I presumed writing them to disk would become an IOPS issue. Also, the shipper that sends events over UDP to Logstash is OSSEC (HIDS), whose syslog_output can't write those events to multiple files (say, 250 MB per file) or even to a single file, which means I'd still have to add another component to do that for me. Moreover, keeping events in a single file would become a problem once the file gets big, so I preferred storing them in one place (Elasticsearch) rather than two locations.

Thank you for your time

Kindly point me to those options and I'll be grateful.

I used the udp input workers option; I believe that's the one you meant :slightly_smiling:
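For completeness, this is the bit I changed (the worker count here is just what I'm trying, not a recommendation):

```
input {
  udp {
    port    => 10514   # placeholder port
    workers => 4       # raised from the default
  }
}
```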
Thanks