We have many indexes and are constantly adding more to our cluster. We need to know when an index hasn't received any documents in X minutes. For instance, if logs-foo hasn't received any documents in 1 hour, we want an alert to fire; the same goes for logs-bar. The issue is that we can set up a watcher like this for each individual index, but not a generic catch-all, and creating one watcher per index doesn't scale. Is there a way to create a single watcher rule that looks at all indexes and, if no documents have been ingested into any one of them in X minutes, fires an alert specifying which index has stopped receiving documents?
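To illustrate the comparison such a catch-all check has to make, here is a minimal sketch (not a watcher definition): take every index matching a pattern, take the indices that actually received documents inside the window, and alert on the difference. It assumes a recent elasticsearch-py client, a cluster at http://localhost:9200, the index pattern logs-*, and a @timestamp field; the same range query plus terms aggregation on _index could serve as the search input of a single watcher, with the comparison done in a script condition or transform instead of Python.

```python
# Rough sketch with assumed names: find indices matching a pattern that have
# received no documents in the last hour. Requires a recent elasticsearch-py.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed endpoint

PATTERN = "logs-*"   # assumed index pattern
WINDOW = "now-1h"    # alert if an index has no documents since this point

# Every concrete index currently matching the pattern.
all_indices = {row["index"] for row in es.cat.indices(index=PATTERN, format="json")}

# Indices that received at least one document inside the window:
# a range query on @timestamp plus a terms aggregation on the _index field.
resp = es.search(
    index=PATTERN,
    size=0,
    query={"range": {"@timestamp": {"gte": WINDOW}}},
    aggs={"per_index": {"terms": {"field": "_index", "size": 10000}}},
)
active = {bucket["key"] for bucket in resp["aggregations"]["per_index"]["buckets"]}

# Anything matching the pattern but missing from the aggregation is stale.
for index in sorted(all_indices - active):
    print(f"ALERT: {index} has received no documents since {WINDOW}")
```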