I'm looking for advice on a new cluster. I would like to manage my firewall logs with an ES cluster.
I have five sites in five separate cities.
So, my idea is to install an ELK stack on each site in order to collect the syslog logs from the local firewalls. By doing this, I'm sure that logs are collected even if a link between two sites is broken.
These five stacks will be my five nodes. So every node will be a "data node".
Originally my idea was to create an index every day, but on reflection I think there would be too many shards in my cluster. For example, for one day with the default settings, there would be 5 indices, each divided into 5 primary shards and 5 replica shards. So 5 * 5 = 25 primary shards plus 25 more replica shards ==> 50 shards per day.
What do you think of it?
From your point of view, what is best in terms of reliability: creating a single index, or creating daily indices?
In fact, I would like to use Curator to delete the indices older than 7 days. Is it also possible to delete documents older than 7 days? I didn't find anything about that.
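For reference, this is the kind of Curator action file I have in mind for the index deletion (the firewall- prefix and the daily naming pattern are just assumptions on my part):

```yaml
# action.yml -- run with: curator --config config.yml action.yml
actions:
  1:
    action: delete_indices
    description: Delete firewall indices older than 7 days
    options:
      ignore_empty_list: True
    filters:
      - filtertype: pattern
        kind: prefix
        value: firewall-          # assumed index prefix
      - filtertype: age
        source: name
        direction: older
        timestring: '%Y.%m.%d'    # assumed daily naming pattern
        unit: days
        unit_count: 7
```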
Moreover, what do you consider the best number of shards, and how can I calculate it? Is there a ratio between the number of nodes and the number of shards?
Thanks in advance,
I'm looking forward to your advice.
Is your cluster spread across different data centers?
Because as per your description, there will be an ELK stack on each site.
So the question is: are you going to cluster all of these sites together?
Instead of that, you could ship all of your logs to a central place where you would have only one instance of ELK.
You can access it from any of the sites, as it would be inside your network.
That means, even if you go ahead with the default configuration, there will be 1 * 5 = 5 primary shards per day.
If you cluster your Elasticsearch nodes across sites, the cluster is susceptible to split brain; clustering across data centers is not advisable.
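If you do decide to cluster them anyway, at least protect yourself against split brain by requiring a quorum of master-eligible nodes. A sketch, assuming Elasticsearch 6.x or earlier and all 5 nodes being master-eligible:

```yaml
# elasticsearch.yml on every master-eligible node
# quorum = (5 master-eligible nodes / 2) + 1 = 3
discovery.zen.minimum_master_nodes: 3
```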
You can create one index per day for all of the logs, and add a field identifying the site to every document (you could, for example, use the _type field for this); you can then sort and aggregate on that field.
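For example, counting events per site then becomes a simple terms aggregation (the index name and the site field are my own placeholders, and I'm assuming the field is mapped as keyword):

```
GET firewall-2017.01.15/_search
{
  "size": 0,
  "aggs": {
    "events_per_site": {
      "terms": { "field": "site" }
    }
  }
}
```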
What is your use case? How much data do you expect? If it is a small setup, then:
i. Create indices on a daily basis and keep the number of shards per index at 1 or 2 (see the template sketch below).
ii. You can have a replica for each of them.
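A minimal index template along those lines might look like this (a sketch for Elasticsearch 5.x, where the matching key is called template; on 6.x it is index_patterns instead; the firewall- naming is a placeholder):

```
PUT _template/firewall
{
  "template": "firewall-*",
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1
  }
}
```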
If you need to centralize all firewall logs in one location, like headquarters, why don't you have the firewall logs written to local storage (files) and use Filebeat or nxlog to read and forward events to a central ELK? With that, a broken link between sites is fine: events stay buffered on local disk and forwarding resumes once the link is back.
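A minimal Filebeat sketch for that setup could look like the following (the paths, site name, and host are placeholders; on Filebeat versions before 6.3 the section is called filebeat.prospectors instead of filebeat.inputs):

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/firewall/*.log   # wherever the firewall writes its logs
    fields:
      site: paris                 # tag each event with its originating site

output.logstash:
  hosts: ["central-elk.example.com:5044"]   # the central ELK entry point
```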
Do you need to correlate events among all firewalls?
Do you want to or have the hardware needed to run 5 separate ES clusters in 5 sites? That leads to more administration overhead.
About sharding, for time series events, I would go for either a daily or weekly index with 1 primary shard and 1 replica first. We don't want too many unnecessary shards in a cluster, and we also don't want the size of a shard to go over 50 GB, as Elastic support folks have recommended. I would personally keep a shard size under 30 GB. In addition, if you have an ES cluster with only 1 or 2 data nodes, having more shards than the number of nodes won't improve performance.
For instance, I have 4 x 30 GB heap size ES instances, and for a daily index of 100 GB, I set 2 x 2 shards (4 primary shards in total, so each stays around 25 GB).
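To keep an eye on shard sizes over time, the cat shards API is convenient (the firewall-* pattern is a placeholder):

```
GET _cat/shards/firewall-*?v&h=index,shard,prirep,store&s=store:desc
```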
My first idea was to create independent data nodes, one for each site. I don't want to centralize them on one site.
So yes, I have the hardware for running 5 separate ELK stacks in 5 different sites.
After that, I will implement centralized dashboards with Grafana or something like that.
To give you an idea, each day I get at most 3 GB of logs for one firewall. It's not a lot of data. So for one site it represents around 25 GB per week.
In any case, all the raw logs are securely kept on another server.
After reading both of your answers, maybe something like this could be a good configuration (for one site):
A weekly index with 1 primary shard and 1 replica shard.
In the cluster, there will be 5 * (1 + 1) = 10 shards per week.
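If I understand the Logstash date syntax correctly, the weekly rotation could be handled directly in the elasticsearch output (the firewall- prefix is my placeholder):

```
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # %{+xxxx.ww} = ISO week-based year and week number, e.g. firewall-2017.03
    index => "firewall-%{+xxxx.ww}"
  }
}
```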
Do you think it is a good setup?