Best server specifications

Hi guys,

I have a project to build a SIEM using ELK, and it will be split across 5 servers: 3 for Elasticsearch, 1 for Kibana, and 1 for Logstash.

Can you help me with the most ideal specs for each of these servers?

Thanks

It all depends on your use case: how much data you expect to come in and how long you need to store it. Are the servers physical, VMs, or cloud?

Some metrics from my experience: Windows endpoints with a tuned Sysmon config, audit policy, and WEF generate about 2 MB of logs per day each. With AD infrastructure, Exchange, etc., the average across all devices ends up near 7 MB per device. More tuning could be had.

Taking Windows logs from over 800 devices gives me over 300 EPS. I have a lot of config files in Logstash for parsing and tuning; it has 8 cores and 8 GB RAM, with 6 GB for the JVM. CPU utilization only goes above a few percent when I change a config and logs burst in when it comes back up. The RAM/JVM allocation could be reduced, as it's not using anywhere near that.
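As a rough back-of-the-envelope check on those numbers (a sketch in Python; the per-device figure is my tuned average, so treat it as an assumption for your environment):

```python
# Rough daily-ingest estimate from the figures above (assumptions, not gospel).
devices = 800          # Windows endpoints feeding Logstash
mb_per_device_day = 7  # tuned average incl. AD/Exchange; untuned is far higher

daily_gb = devices * mb_per_device_day / 1024
print(f"~{daily_gb:.1f} GB/day raw log volume")   # ~5.5 GB/day

# Sanity-check against the observed ~300 events/sec:
eps = 300
avg_event_bytes = daily_gb * 1024**3 / (eps * 86400)
print(f"~{avg_event_bytes:.0f} bytes/event on average")  # ~227 bytes
```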

Kibana is a small instance with about 2 GB RAM.

For the front-end (Elasticsearch) servers, I have 2 setups. The main one has 2 hot servers and 2 warm. The hot nodes have 8 GB RAM, SSD storage, and a lot of cores. This keeps around 10 days of data, which is approx 80 GB, then moves to warm.

Having been through this, it is hard to predict. The main setup was originally 4 GB with fewer cores and had to be upped to 8 GB.

Firewalls generate a lot more logs than endpoints, so that can skew things as well.

Hopefully this gives you a starting point, but be prepared to turn things up unless you have the kit to throw a lot of resources at it. Remember the max JVM heap is 31/32 GB, and for Elasticsearch it's recommended that the heap is 50% of RAM.
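A minimal sketch of that heap rule (the ~31 GB ceiling is the compressed-pointers limit; the 50% figure is the rule of thumb above, not a hard spec):

```python
def recommended_es_heap_gb(ram_gb: float) -> float:
    """Rule of thumb from above: heap = 50% of RAM, capped at ~31 GB
    so the JVM keeps compressed object pointers."""
    return min(ram_gb / 2, 31)

for ram in (8, 16, 32, 64, 128):
    print(f"{ram:>4} GB RAM -> {recommended_es_heap_gb(ram):g} GB heap")
# Note 64 GB RAM -> 31 GB heap, not 32: staying under the cap is the point.
```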

Aaah I see, thanks. BTW, how about the Logstash server?

And what size of hard disk should I prepare for each server?

I gave an example of my Logstash spec above: 8 GB RAM, 6 GB of which is JVM heap, and 8 cores. Realistically that is overkill, but I have the resources. Disk-wise, Logstash can be small; I use about 100 GB, but again that's overkill.

As for the ES servers, it boils down to your retention and use case. There are guides on shard sizing; this one is a little out of date (the ratios have gotten better): How many shards should I have in my Elasticsearch cluster? | Elastic Blog
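As a rough illustration of the kind of math in that post (a sketch; the ~20-shards-per-GB-heap ceiling is that blog's old rule of thumb, and the retention/replica numbers are made-up placeholders):

```python
# Back-of-envelope shard planning using the old rules of thumb from the blog:
# keep shards in the tens of GB, and stay under ~20 shards per GB of heap.
heap_gb = 4                     # heap per data node (placeholder)
max_shards_per_node = 20 * heap_gb

retention_days = 60             # daily indices kept this long (placeholder)
replicas = 1
shards_in_cluster = retention_days * (1 + replicas)  # 1 primary/day

nodes_needed = -(-shards_in_cluster // max_shards_per_node)  # ceiling division
print(f"{shards_in_cluster} shards -> at least {nodes_needed} node(s) by shard count")
```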

An 8 GB hot server with a 4 GB JVM heap would have a capacity of approx 150 GB. A warm server of a similar spec could hold a lot more, as it's not being written to and isn't searched as frequently.
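That works out to roughly 19 GB of hot data per GB of RAM on my setup; a quick sketch to scale that ratio (assuming it holds linearly, which is only roughly true in practice):

```python
# Derive a RAM-to-hot-disk ratio from the figure above and scale it.
hot_capacity_gb, hot_ram_gb = 150, 8
ratio = hot_capacity_gb / hot_ram_gb   # ~18.75 GB of hot data per GB RAM

for ram in (8, 16, 32, 64):
    print(f"{ram:>3} GB RAM hot node -> ~{ram * ratio:.0f} GB hot data")
```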

I have a hundred servers that I want to monitor. WDYT about this spec?

Elastic: 8 vCPU; 64 GB RAM; 500 GB disk
Kibana: 2 vCPU; 16 GB RAM; 150 GB disk
Logstash: 8 vCPU; 16 GB RAM; 150 GB disk

That should serve you well; tuning the events coming in will make a big difference to EPS and retention.
Kibana does not need that much RAM, though, and Logstash could easily get away with half that as well. It would be better to have 2 Logstash servers of 8 GB each.

That should get you a couple of months of retention. On the front-end Elastic nodes, the disk setup/performance will be a key factor.

Elastic:
A little overkill for 100 servers on the Elastic side, but it will do very well. I have 500+ devices on 3 nodes with 32 GB RAM each and have no starvation issues. The disk backend will kill you faster than the memory. At 500 GB you will be VERY surprised how fast that disappears. As always with storage, never exceed 80% utilization. Running Winlogbeat alone will eat all of that in about 3 weeks if you don't trim out events that are unneeded for your use case. Use LVM and try to split the volume over several VMDKs/QCOW volumes. If it's physical, it has to be SSD, and 1 drive won't survive long (killed a few in dev already). Storage will be your pain point.
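A quick sketch of that disk math (the 80% rule and the ~3-week figure are from above; the burn rate is derived from them, so treat it as an assumption):

```python
# Disk runway for the proposed 500 GB node, using the figures above.
disk_gb = 500
usable_gb = disk_gb * 0.80          # never exceed 80% utilization

# "Winlogbeat alone will eat all of that in about 3 weeks" when untrimmed:
untrimmed_gb_per_day = usable_gb / 21   # ~19 GB/day (derived assumption)
print(f"untrimmed burn rate: ~{untrimmed_gb_per_day:.0f} GB/day")

# If tuning cuts the ingest volume, retention scales inversely:
for kept in (1.0, 0.5, 0.25):       # fraction of the untrimmed volume kept
    days = usable_gb / (untrimmed_gb_per_day * kept)
    print(f"keep {kept:.0%} of events -> ~{days:.0f} days of retention")
```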

Kibana:
4 CPUs minimum or you're going to hate updates. Memory and storage are fine. Kibana won't use much memory, as the default is I think only 1.6~1.8 GB or something low like that. You can set it to 4 GB, but it's very rare that you need that much, even running tons of watchers. If you use Endpoint in the SIEM, it will take a chunk out of it pretty quickly.

Logstash:
@probson is spot on.