The security rules and alerts are fantastic in ELK.
I'm curious to know: where are the rules (the ones that don't require machine learning) run from? Is it the instance running Kibana, or one of the Elasticsearch instances with a specific role?
I'm just looking at what scaling is necessary for the instance that runs the rules.
Correct. I believe it is typical (at least in Cloud) to size Kibana nodes at 4 GB of memory each and then add new nodes as needed.
Yes, but no.
Yes, multiple Kibana nodes will reduce the impact when one node crashes or disappears. However, its tasks will not be picked up by another node until the timeout is reached. I am not sure whether the coordinating or master nodes clear the task manager entries on a node failure; if they do, that would speed recovery up somewhat.
No, it will still result in a delay in rule execution. If you have a rule executing every x minutes on a Kibana node that crashes, the results will not be reported, and the task manager will wait for the timeout (or for the task to be cleared) before another Kibana node picks it up. This costs you a delay and might result in a rule run being cancelled altogether in favor of the next/new rule execution.
You should absolutely run multiple Kibana nodes if your cluster is anything more than a quick couple-of-minutes PoC. Personally I tend to go with (3 + n) × 4 GB nodes, depending on the size of my task queue and cluster.
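As a sketch of the kind of per-node tuning involved: the settings below are real Kibana Task Manager options, but the values shown are the defaults and purely illustrative, not recommendations for any particular cluster.

```yaml
# kibana.yml — Task Manager tuning (illustrative values; defaults shown)

# How many tasks a single Kibana node may run concurrently (default 10).
# More workers means more rule executions per node, at the cost of memory/CPU.
xpack.task_manager.max_workers: 10

# How often (in ms) this node polls the task queue for claimable tasks
# (default 3000). Lower values pick up orphaned tasks sooner after a crash.
xpack.task_manager.poll_interval: 3000
```

Kibana also exposes a `GET /api/task_manager/_health` endpoint whose drift statistics (how far behind schedule tasks are running) are useful when deciding whether to add nodes.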