The security rules and alerts are fantastic in ELK.
I'm curious to know where the rules (the ones that don't require Machine Learning) are run from. Is it the instance running Kibana, or one of the Elasticsearch instances with a specific role?
I'm just looking at what scaling is necessary for the instance that runs the rules.
The rules and alerts are executed in the Kibana instance.
If you are monitoring your cluster, you can check the Kibana status in the Monitoring UI.
Also, running the following request in Kibana Dev Tools will give you a capacity estimation:
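The request itself didn't come through in the post above. Assuming it refers to the Task Manager health API (which Dev Tools can reach via the `kbn:` prefix in recent versions), it would look something like:

```
GET kbn:/api/task_manager/_health
```

The response includes a `capacity_estimation` section with an assessment of whether the current number of Kibana instances can keep up with the task load.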
Thanks @leandrojmp .
Next question: if you are running a lot of rules, it would be good to balance these across several Kibana instances.
Having several Kibana instances would also provide a level of redundancy.
What are people's thoughts?
Correct. I believe it is typical (at least in Elastic Cloud) to size Kibana nodes at 4 GB of memory and then add new nodes as needed.
Yes, but no.
Yes, multiple Kibana nodes will reduce the impact when one node crashes or disappears. However, its tasks will not be picked up by another node until the timeout is reached. I am not sure whether the coordinating or master nodes will clear the Task Manager queue in the event of a node failure; if they do, that would speed things up somewhat.
No, it will still result in a delay in rule execution. If you have a rule executing every x minutes on a Kibana node that crashes, the results will not be reported, and Task Manager will wait for the timeout (or for the task to be cleared) before another Kibana node picks it up. This costs you a delay and might result in rule runs being cancelled altogether in favor of the next/new rule execution.
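For reference, the pickup behaviour described above is governed by Task Manager settings in `kibana.yml`. A minimal sketch, with defaults that are an assumption based on recent Kibana versions:

```yaml
# How often each Kibana node polls the task queue for claimable work (ms)
xpack.task_manager.poll_interval: 3000

# Maximum number of tasks a single Kibana node will run concurrently
xpack.task_manager.max_workers: 10
```

Lowering the poll interval or raising the worker count trades Elasticsearch load for faster task throughput, so check the health API output before changing either.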
You should absolutely run multiple Kibana nodes if your cluster is anything more than a quick couple-of-minutes PoC. Personally I tend to go with 3 + n nodes of 4 GB each, depending on the size of my task queue and cluster.
Yes, Kibana itself will tell you whether you need to scale or not; just run the request shared before.
Only for Kibana; this does not change anything on the Elasticsearch side.