Please help me understand the best way to create a log stoppage alert for critical servers. It should be index specific.
For example: index "A" contains logs from 3 firewall devices: fw1, fw2, fw3.
I need to get an email alert if Elasticsearch stops receiving logs from any of the 3 firewalls for the last 10 minutes.
Can I use the Security > Detections > Threshold rule type for this, or
Observability > Logs > alert?
The docs show an example similar to what you probably want. For yours, you'd want the condition to be that the count of documents over a certain amount of time is zero (or less than you would expect).
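To make that concrete, the condition boils down to a filtered count over the alert window, run per device. Below is a minimal sketch in query DSL, assuming the index is named index_a and the device name lives in a field like observer.name (both are placeholders; substitute whatever your index and field actually are):

```
GET index_a/_count
{
  "query": {
    "bool": {
      "filter": [
        { "term": { "observer.name": "fw1" } },
        { "range": { "@timestamp": { "gte": "now-10m" } } }
      ]
    }
  }
}
```

If that count comes back as 0 for fw1 (or fw2, fw3), logs from that device have stopped arriving in the last 10 minutes.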
If I have 50 servers (50 host.hostname values) in the index, and I want alerts if any of the servers stops sending logs for the last 30 minutes, can I use an index threshold alert as shown below?
INDEX: index_name
WHEN: count()
GROUPED OVER: top 50 'host.hostname'
CONDITION: IS BELOW OR EQUALS 0
FOR THE LAST: 30 minutes
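For context, that grouped condition corresponds roughly to a terms aggregation over the alert window, something like the sketch below (index_name, the field, and the 30-minute window are just the values from the example above):

```
GET index_name/_search
{
  "size": 0,
  "query": {
    "range": { "@timestamp": { "gte": "now-30m" } }
  },
  "aggs": {
    "per_host": {
      "terms": { "field": "host.hostname", "size": 50 }
    }
  }
}
```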
Ah, I think there will be a problem with this due to the lack of data from these servers when they are down: a host that has stopped logging entirely has no documents in the 30-minute window, so it never shows up in the grouping and the "below or equals 0" condition never evaluates for it. Probably the best you can do is have an alert that checks whether the count is below some expected level, and that would only alert for a while; once the hostname stops appearing in the grouping altogether, the alert will presumably recover. It might be useful, hard to say. You could also look into the Elasticsearch query alerting rule type, if you can fashion a query that returns the info you want.
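One way to fashion such a query is to compare a longer baseline window against the recent window, so that hosts with a history of logging but nothing recent get flagged. This is only a sketch under assumptions: index_name, host.hostname, @timestamp, the 24-hour baseline, and the 30-minute window would all need adapting to your data:

```
GET index_name/_search
{
  "size": 0,
  "query": {
    "range": { "@timestamp": { "gte": "now-24h" } }
  },
  "aggs": {
    "per_host": {
      "terms": { "field": "host.hostname", "size": 100 },
      "aggs": {
        "recent_docs": {
          "filter": { "range": { "@timestamp": { "gte": "now-30m" } } },
          "aggs": {
            "recent_count": { "value_count": { "field": "@timestamp" } }
          }
        },
        "silent_only": {
          "bucket_selector": {
            "buckets_path": { "recent": "recent_docs>recent_count" },
            "script": "params.recent == 0"
          }
        }
      }
    }
  }
}
```

The buckets that survive the bucket_selector are the hosts that logged in the last 24 hours but sent nothing in the last 30 minutes. Whether the query rule can act directly on aggregation results depends on your stack version, so treat this as the shape of the query rather than a drop-in rule definition.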
Have you also looked into using Uptime Monitoring for this? Rather than a general-purpose alert, an uptime alert may be more appropriate here.