Deployment Architecture Scenarios Using ELK for SIEM at Large Scale On-Premises

I want to create some deployment scenarios for ELK used as a SIEM at large scale. Any help/suggestions about this?

Hello, @NasrJBr!

For a general starting point, I suggest reviewing the ingest architecture guide from the Elastic docs.


@ebeahan Is this documentation new? Didn't know it, it is pretty good.

I think that this one can be updated now that the Elastic Agent can ship directly to Kafka.


I deployed a couple of these, and what I always did to help them scale was:

  • Have dedicated masters from the beginning; 3 master nodes are enough for the majority of cases.
  • Have data tiers from the beginning.
  • Use Kafka as a message queue for data sources with high volume.
  • Use Elastic Agent integrations when possible.
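To illustrate the first two points, here is a minimal sketch of the `node.roles` setting in `elasticsearch.yml` for a dedicated master node versus a hot-tier data node (the role split itself is standard Elasticsearch configuration; which roles you combine on a data node depends on your sizing):

```yaml
# elasticsearch.yml on a dedicated master node
node.roles: [ master ]

# elasticsearch.yml on a hot-tier data node that also handles ingest
node.roles: [ data_hot, data_content, ingest ]
```

With dedicated masters, cluster coordination is isolated from indexing and search load, which is what keeps the cluster stable as ingest volume grows.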

So I have almost everything sending data to Kafka, with Logstash consuming this data and sending it to Elasticsearch; in some cases I have Elastic Agents sending data directly to Elasticsearch.
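The Kafka-to-Elasticsearch leg can be sketched as a Logstash pipeline like the one below. The broker hostnames, topic, and consumer group are placeholders, not values from the thread:

```conf
# pipeline.conf — sketch of a Logstash consumer pipeline
# (hostnames, topic, and group_id are assumptions)
input {
  kafka {
    bootstrap_servers => "kafka1:9092,kafka2:9092,kafka3:9092"
    topics            => ["siem-logs"]
    group_id          => "logstash-siem"
    codec             => "json"
  }
}

filter {
  # parsing/enrichment goes here, e.g. grok, date, geoip
}

output {
  elasticsearch {
    hosts       => ["https://es-data1:9200", "https://es-data2:9200"]
    data_stream => true
  }
}
```

Running two Logstash instances with the same `group_id` lets Kafka balance partitions between them, which is how the two consumer Logstash machines in the example architecture share the load.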

For network devices, I send the data to Logstash instances whose only function is to ship the data to Kafka; these can be replaced with lighter tools like Filebeat or Vector (from Datadog).
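As a sketch of the lighter-weight option, a Filebeat instance can receive syslog from network devices and forward it straight to Kafka. The port, topic, and broker names below are assumptions:

```yaml
# filebeat.yml — sketch of a syslog-to-Kafka shipper
# (listen port, topic, and broker hostnames are assumptions)
filebeat.inputs:
  - type: syslog
    protocol.udp:
      host: "0.0.0.0:9514"

output.kafka:
  hosts: ["kafka1:9092", "kafka2:9092", "kafka3:9092"]
  topic: "network-syslog"
  compression: gzip
```

This keeps the edge tier stateless and cheap; all parsing still happens in the Logstash consumers downstream of Kafka.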

An example of architecture is this:

  • 5 machines for Elasticsearch, where 3 are dedicated masters and 2 are data_hot/data_content/ingest nodes
  • 1 machine for Kibana
  • 1 machine for Fleet Server
  • 3 machines for a Kafka cluster
  • 2 Logstash instances for receiving data and sending it to Kafka
  • 2 Logstash instances for reading from Kafka, parsing the data, and sending it to Elasticsearch

Of course, this can be changed depending on the requirements and budget.


This depends entirely on your requirements: for example, whether you have just one datacenter or multiple datacenters/locations from which you need to collect data, what kind of data you need, etc.

In the previous answer I already gave an example distribution of machines.


Thank you @leandrojmp.