SIEM deployment design advice

Hello All,

I am in the process of building a SIEM (Security Information and Event Management) system based on the ELK stack.
For that purpose I would like to carefully design the deployment so it can accommodate a medium to large volume of events.

Most of my sensors are Snort-based and send data over syslog. I have read that, when dealing with a large volume of events, it is recommended to use Redis or a similar queueing facility.
So my question: from what event volume, or events per second, does using such a queue become the recommended best practice?
In addition, which feed order would be best: Syslog >> Redis >> Logstash >> Elasticsearch, or Syslog >> Logstash >> Redis >> Elasticsearch?

Please advise
Thanks

Most of my sensors are Snort-based and send data over syslog. I have read that, when dealing with a large volume of events, it is recommended to use Redis or a similar queueing facility.

Yes, that's frequently used to distribute load across multiple Logstash servers as well as being more resilient against traffic spikes.
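As an illustration, a shipper-side Logstash pipeline for that pattern might look like the sketch below. The port, Redis host name, and list key `snort-events` are assumptions for the example, not fixed names:

```conf
# Shipper tier: accept syslog from the Snort sensors and push the events
# onto a Redis list, where one or more indexer Logstash nodes pick them up.
input {
  syslog {
    port => 5514                            # assumed listening port for the sensors
  }
}
output {
  redis {
    host      => "redis.example.internal"   # assumed broker address
    data_type => "list"
    key       => "snort-events"             # assumed list key
  }
}
```

Because Redis acts as a buffer here, you can run several indexer Logstash instances reading from the same list, and a traffic spike just makes the list grow temporarily instead of dropping events.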

So my question: from what event volume, or events per second, does using such a queue become the recommended best practice?

It depends. There is no universal events-per-second threshold; it's a function of your hardware, how heavy your Logstash filters are, and how bursty the traffic is. A broker mainly pays off once Logstash or Elasticsearch can no longer absorb your peak rate in real time.

In addition, which feed order would be best: Syslog >> Redis >> Logstash >> Elasticsearch, or Syslog >> Logstash >> Redis >> Elasticsearch?

AFAIK Elasticsearch can't pull data from Redis (unless it's a recently added ingest node feature), so that option is out. If you can get your syslog daemon to log directly to Redis, that sounds like a good idea.
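For the indexer side of the Syslog >> Redis >> Logstash >> Elasticsearch order, a sketch of a Logstash pipeline reading from Redis and writing to Elasticsearch could look like this (the host names, list key, and index pattern are assumptions for the example):

```conf
# Indexer tier: pop events from the Redis list and index them into Elasticsearch.
input {
  redis {
    host      => "redis.example.internal"  # assumed broker address
    data_type => "list"
    key       => "snort-events"            # must match the key the shipper writes to
  }
}
output {
  elasticsearch {
    hosts => ["es.example.internal:9200"]  # assumed Elasticsearch endpoint
    index => "snort-%{+YYYY.MM.dd}"        # daily indices for the Snort events
  }
}
```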
