ELK production server configuration

Hi Team,

We are new to the ELK stack and want to use it in a production environment. We plan to start with a few applications and, based on performance, move all our application logs to ELK. We did a POC in UAT and it is working as expected.

We need your help configuring production servers for the requirements below.

Application log file size per day: 1 GB
We need to keep 30 days of data
Planning to configure 3 shards and 1 replica per index
Planning to use 3 Elasticsearch servers
1 server for Kibana
1 server for Logstash
Data will come from a Kafka server

Is there any calculation for production server configuration based on the above requirements? Or can someone suggest what server configurations we should use for these requirements?

We need more info to answer this.
How important is reliability? How bad is it if you lose the data? How often do you have to access the data? How many people search through the data? How often do you need to move whole indices to other computers?

Overall you have very modest requirements, and compared to other ELK users this is a walk in the park. 30 GB of data in 30 days is nothing.

If data safety is important, there is no way around a 3-node cluster. Any entry-level server should be good enough: give each node 8 GB of RAM, a 500 GB SSD or HDD (depending on your speed needs) and a normal modern processor, and you are good to go.
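A minimal elasticsearch.yml for such a 3-node cluster could look like this (just a sketch; the cluster name and the hostnames es-node-1 to es-node-3 are made up, adjust them to your environment):

```
# elasticsearch.yml on each of the three nodes (node.name differs per node)
cluster.name: prod-logs            # assumed cluster name
node.name: es-node-1               # es-node-2 / es-node-3 on the other nodes
network.host: 0.0.0.0
discovery.seed_hosts: ["es-node-1", "es-node-2", "es-node-3"]
cluster.initial_master_nodes: ["es-node-1", "es-node-2", "es-node-3"]
```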

If data safety is not too important, just use a single-node cluster and snapshot the data to another server.
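Registering a shared-filesystem snapshot repository is enough for that (a sketch; the repository name nightly_backup and the mount path are examples, and the path must also be listed under path.repo in elasticsearch.yml):

```
PUT _snapshot/nightly_backup
{
  "type": "fs",
  "settings": {
    "location": "/mnt/backups/elasticsearch"
  }
}
```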

As I said, you have very modest requirements and not a lot of data, so not much hardware is required.
You can easily run Kibana on the same servers as Elasticsearch because you have little traffic anyway.

Thank you for your reply.

We need more info to answer this.
How important is reliability? How bad is it if you lose the data? How often do you have to access the data? How many people search through the data? How often do you need to move whole indices to other computers?

For now we are planning to enable it for very few applications, so we will have at most 4-5 users accessing the application logs 4-5 times a day. We will generate graphs and reports 3-4 times a day.

There is no business impact even if it is down, but we need this data for production support, in case we receive any complaints from clients that need investigation.

Overall you have very modest requirements, and compared to other ELK users this is a walk in the park. 30 GB of data in 30 days is nothing.

No, for now it is a small application, but our main aim is to enable ELK for all applications. We have more than 90 GB of data per day and need to keep this data for 30 days.

If data safety is important, there is no way around a 3-node cluster. Any entry-level server should be good enough: give each node 8 GB of RAM, a 500 GB SSD or HDD (depending on your speed needs) and a normal modern processor, and you are good to go.

If data safety is not too important, just use a single-node cluster and snapshot the data to another server.

As I said, you have very modest requirements and not a lot of data, so not much hardware is required.
You can easily run Kibana on the same servers as Elasticsearch because you have little traffic anyway.

OK. Can you suggest a server configuration if I have 10 GB of data per day and need to keep 30 days of data on the server? The data is important and 10 users will use it for production investigation.

Based on this calculation and its performance, we will size and set up servers for my main applications (90 GB of data per day).

Please suggest configurations and the number of servers required.

Well, which is it now, 1 GB/day or 10 GB/day?

Hi Defalt,

I want some suggestions for a production server configuration if I index 1 GB of data every day and keep 30 days of data. It is critical data and 5 users will access it every day.

Is there any formula for sizing the server configurations for Elasticsearch, Kibana and Logstash?

Based on this calculation I will set up the servers for the next project, where I was talking about 90 GB/day.

....

Ok.

Indices can get really big, so you should not have a problem with your data. You want to use a single node, which is risky, but yes, you can do that. 1 GB/day is 30 GB per month, so your index should not get bigger than that, and a 100 GB disk would be enough (can you even buy disks that small nowadays?). As I said, you really don't need much for this amount of data. Just use 8-16 GB of RAM, give half of it to Elasticsearch as heap, and you are fine.
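For example, with 16 GB of RAM you would pin the heap to 8 GB in Elasticsearch's jvm.options (a sketch; scale it to half of whatever RAM the node actually has):

```
# config/jvm.options — give Elasticsearch half the machine's RAM as heap
# Example for a 16 GB node; always keep -Xms and -Xmx identical.
-Xms8g
-Xmx8g
```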

Let's get to the interesting part: 90 GB/day.
90 GB/day = 2,700 GB per 30 days.
That's pretty big, and I would not want to fiddle around with it too much, so I would use ILM: one index for every day, and after 30 days the oldest index gets deleted. You access your data via an alias. You will need around 3 TB of disk for 2,700 GB of data, so that's how much storage you need.
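A minimal ILM policy along those lines could look like this (just a sketch; the policy name logs-30d and the daily rollover interval are assumptions):

```
PUT _ilm/policy/logs-30d
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_age": "1d" }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": { "delete": {} }
      }
    }
  }
}
```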
If you only use one index: shards should stay below 50 GB, so you need 54 shards (2,700 GB / 50 GB). Assuming roughly 15 shards per GB of heap, you would still not need more than 4 GB of heap and 8 GB of RAM. As you can see, your setup does not need much. A template wiring the policy to your indices is sketched below.
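Something like this index template would attach the policy to your daily indices (a sketch; the pattern applogs-*, the alias applogs and the 3-shard/1-replica layout from your original post are assumptions):

```
PUT _index_template/applogs
{
  "index_patterns": ["applogs-*"],
  "template": {
    "settings": {
      "number_of_shards": 3,
      "number_of_replicas": 1,
      "index.lifecycle.name": "logs-30d",
      "index.lifecycle.rollover_alias": "applogs"
    }
  }
}
```

You would still need to bootstrap the first index (e.g. applogs-000001) with applogs as its write alias before rollover can take over.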

I would go for 32 GB of RAM, 16 GB of heap, a 3-4 TB SSD and a good CPU like an i7-8700K or something similar. Yeah, and that's about it.
