Hardware requirements for my ELK server

Hi, I am new to ELK and I am wondering where to start: I have many firewalls and I collect 200GB of logs per day.
ELK is already installed, but the server capacity is not enough, so we need to resize and upgrade it.
If someone can help me, I would like to know which hardware to use and how much CPU, RAM, and storage I need for 200GB of logs per day and more.
Thank you very much :smiley:

I found this:

Example: Sizing a small cluster

You might be pulling logs and metrics from some applications, databases, web servers, the network, and other supporting services. Let's assume this pulls in 1GB per day and you need to keep the data 9 months.

You can use 8GB memory per node for this small deployment. Let’s do the math:

  • Total data (GB) = 1GB x (9 x 30 days) x 2 (primary + replica) = 540GB
  • Total storage (GB) = 540GB x (1 + 0.15 + 0.1) = 675GB
  • Total data nodes = ROUNDUP(675GB disk / (8GB RAM x 30 memory:data ratio)) = 3 nodes

Source: Benchmarking and sizing your Elasticsearch cluster for logs and metrics | Elastic Blog
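To sanity-check that math, here is the blog's calculation as a small Python sketch (my own illustration; the 15% watermark, 10% margin, and 1:30 memory-to-data ratio are the blog's assumptions):

```python
import math

def sizing(daily_gb, retention_days, replicas=1,
           watermark=0.15, margin=0.10,
           ram_per_node_gb=8, mem_data_ratio=30):
    """Cluster sizing math from the Elastic sizing blog post."""
    # Data kept in the cluster: primaries plus replicas.
    total_data_gb = daily_gb * retention_days * (1 + replicas)
    # Headroom for the disk watermark plus a safety margin.
    total_storage_gb = total_data_gb * (1 + watermark + margin)
    # A node holds roughly RAM x memory:data ratio worth of data.
    nodes = math.ceil(total_storage_gb / (ram_per_node_gb * mem_data_ratio))
    return total_data_gb, total_storage_gb, nodes

print(sizing(1, 9 * 30))  # blog example: (540, 675.0, 3)
```

Plugging in my own numbers would just mean changing `daily_gb` to 200 and `retention_days` to 30.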

I wonder if this is a good approach for my case?

What is the configuration of your cluster? Which version are you running? Are you using a single-node cluster, or do you have more nodes in your cluster? For how long do you need to keep your logs in your cluster?

What is the issue you are facing that indicates your server capacity is not enough?


Hi sir,
Thank you for your answer. Here are my answers to your questions:
• What is the configuration of your cluster?
We have two stacks: one for production and one for audit. We especially need an upgrade on the production stack.

β€’ Are you using a single-node cluster or you have more nodes in your cluster?
We have 12 nodes
β€’ For how much time do you need to keep your logs in your cluster?
1 month
β€’ What is the issue that you are facing that indicates that your server capacity is not enough?
Not enough space in the storage server
And for the setup of our infrastructure, we have:

  • 200 AD
  • 80 servers
  • 50 firewalls

Thank you again for the help :smiley:

Capacity planning is a bit tricky and can differ for individual use cases.

You are talking about 6-7TB of data (index size), considering 200GB per day for a 30-day retention.
Based on the SLA you have for your queries and your write throughput, you can decide on the number of replicas and shards.
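Also, since your retention is only 30 days, make sure old indices are actually deleted so disk usage stays bounded. A minimal sketch of a delete-after-30-days ILM policy, assuming the 8.x Python client (the policy name, rollover settings, and connection details are hypothetical placeholders):

```python
from elasticsearch import Elasticsearch

# Hypothetical connection details; adjust to your cluster.
es = Elasticsearch("https://localhost:9200", api_key="...")

# Roll over daily (or at 50GB per primary shard) and delete
# indices 30 days after rollover, so the cluster holds roughly
# one month of logs at most.
es.ilm.put_lifecycle(
    name="logs-30d",  # hypothetical policy name
    policy={
        "phases": {
            "hot": {
                "actions": {
                    "rollover": {
                        "max_age": "1d",
                        "max_primary_shard_size": "50gb",
                    }
                }
            },
            "delete": {"min_age": "30d", "actions": {"delete": {}}},
        }
    },
)
```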

What is the infra for those 12 nodes?
CPU, RAM, heap, and disk size?

To hold 6-7TB of data: if your storage per node is 2TB, then you are already good with your 12 nodes. But if you can share more details on the infra, it will be clearer for us to recommend something better.
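As a rough back-of-the-envelope check (my own sketch, reusing the formula from the blog post quoted above; the replica count and the 2TB-per-node disk are assumptions to adjust):

```python
import math

daily_gb = 200           # ingest per day
retention_days = 30      # logs kept for 1 month
replicas = 1             # one replica per primary (assumption)
overhead = 0.15 + 0.10   # disk watermark + safety margin (blog's figures)
disk_per_node_gb = 2048  # 2TB of usable disk per data node (assumption)

total_storage_gb = daily_gb * retention_days * (1 + replicas) * (1 + overhead)
nodes_needed = math.ceil(total_storage_gb / disk_per_node_gb)

print(f"Total storage: {total_storage_gb / 1024:.1f} TB")  # ~14.6 TB
print(f"Data nodes needed at 2TB each: {nodes_needed}")    # 8
```

With no replicas, the 6-7TB figure above applies directly; with one replica it roughly doubles, which is why the per-node disk size matters so much.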
