Proactive watchers

Hi All,

I wanted to know if there is a way to make a watcher proactive rather than timer-based. Is there a way to achieve this?

By proactive I mean that if there is an error, the corresponding watcher should be triggered immediately, rather than a timer running every 10 minutes to check for it.

Thanks and Regards,
Anuraag Kamath

Hey,

can you be more specific about what you are after? Maybe lay out your use case a bit further?

From a user/watcher perspective, what would be the difference between running a watch every few seconds and what you are trying to do?

--Alex

Hey Alex,

the use case is that there are multiple things we need to monitor, e.g. CPU utilization, application errors, and infrastructure errors. So instead of pulling the data, is there a push mechanism available that would trigger the watcher, rather than the other way round?

I just wanted to explore the possibility. A timer works for us, but if this option is available we might want to weigh the pros and cons of using such a proactive watcher in our system based on performance.

Also, each time the watch runs there is a query involved, which might be expensive depending on our data size. Hence, for evaluation purposes, we wanted to know whether this is possible.

Thanks and Regards,
Anuraag Kamath

Hey,

while Watcher itself has the possibility to implement custom triggers (in case you have ever wondered why you have to specify a trigger in the JSON first, and then the schedule part: the reason is that the schedule trigger is the only current implementation), right now you are bound to scheduled queries.
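To illustrate the nesting Alex mentions, here is a minimal sketch of a watch body (the index pattern, field name, and email address are made-up placeholders, not from this thread):

```python
# Sketch of a Watcher definition. Note that "trigger" wraps
# "schedule" -- the schedule trigger is currently the only
# trigger implementation, which is why the JSON is nested this way.
watch = {
    "trigger": {
        # A custom trigger implementation would slot in at this level;
        # today only "schedule" exists.
        "schedule": {"interval": "10m"}
    },
    "input": {
        "search": {
            "request": {
                "indices": ["logs-*"],  # hypothetical index pattern
                "body": {
                    # hypothetical field/value
                    "query": {"match": {"level": "error"}}
                }
            }
        }
    },
    "condition": {
        "compare": {"ctx.payload.hits.total": {"gt": 0}}
    },
    "actions": {
        "notify_email": {
            "email": {
                "to": "ops@example.com",  # placeholder address
                "subject": "Errors detected"
            }
        }
    }
}
```

The point of the sketch is only the structure: `trigger` is a wrapper around `schedule` because other trigger types were anticipated but never implemented.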

Independently from Watcher, you might want to look at the percolator to achieve something like that.

Hope this helps!

--Alex

Hey Alex thanks for your reply!

I got a basic idea of what the percolator does: it stores queries and then matches documents against them at search time. However, I am not sure whether the percolator can be used for alerting, and if so, is there any documentation or an example to refer to?

Again, by alerting I mean that whenever a new log comes in and matches, an alert should be triggered, rather than checking on a fixed-interval basis.

Thanks and Regards,
Anuraag Kamath

Hey,

this is exactly what you can do with the percolator. Alongside each indexing operation, you can execute a percolate query and see if that document matches any of the registered queries.
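As a rough sketch of the 5.x-style flow (all index, type, and field names here are made up for illustration): you map a `percolator` field, register the queries you care about, and then run a `percolate` query with each new document to find the registered queries it matches.

```python
# 1. Mapping with a "percolator" field to hold registered queries.
mapping = {
    "mappings": {
        "doc": {
            "properties": {
                "query": {"type": "percolator"},
                "message": {"type": "text"}
            }
        }
    }
}

# 2. A registered query: future documents are matched against this.
registered_query = {
    "query": {"match": {"message": "error"}}
}

# 3. At index time, also run a percolate search with the new document
#    to find every registered query it matches.
percolate_request = {
    "query": {
        "percolate": {
            "field": "query",
            "document": {"message": "disk error on node-1"}
        }
    }
}
```

These are only request-body sketches; in practice you would send them to Elasticsearch yourself (e.g. via a client library) as part of your ingestion path.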

There are two older blog posts that might help (note that they don't take into account that, as of 5.0, percolation is a query rather than a dedicated API), but they should help with the basic functionality.


Hope this helps.

--Alex

Hey Alex, thanks! I understood how it works, but below is a scenario.

I set up a percolator query for the Metricbeat index with value > 0.7, i.e. more than 70% usage.

Now, unless I execute the percolate operation, I won't know whether this has occurred. It is only when I execute it that I find out the usage is more than 70%.

The use case in the above scenario is that as soon as CPU usage exceeds 70%, an alert mail should be triggered automatically, i.e. the alert mail should notify the user without somebody manually executing the percolate operation.

Thanks and Regards,
Anuraag Kamath

Hi Alex,
Can you or any of your team members comment on this, so that we can then focus on watchers for timer-based alerts and the percolator for saved queries?

Thanks and Regards,
Anuraag Kamath

If you want to use the percolator, you have to ensure that a percolate query is executed alongside every index operation, and that you send an email yourself based on the result of that execution.
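In other words, the "push" lives in your own indexing code. A self-contained sketch of that flow (the `send_mail` callback and the predicate-based stand-in for the percolator index are illustrative, not a real Elasticsearch API):

```python
def index_and_alert(doc, registered_queries, send_mail):
    """Index a document, percolate it, and alert on any match.

    `registered_queries` stands in for the percolator index; here it
    is a dict of query-id -> predicate so the sketch runs standalone.
    In a real setup this step would be a percolate query against
    Elasticsearch, executed right after (or alongside) the index call.
    """
    matches = [qid for qid, predicate in registered_queries.items()
               if predicate(doc)]
    if matches:
        # Your code, not Elasticsearch, sends the notification.
        send_mail(f"Alert: document matched queries {matches}")
    return matches

# Example: a saved "CPU over 70%" query, as in the scenario above.
queries = {"cpu-over-70": lambda d: d.get("cpu", 0) > 0.7}
sent = []
index_and_alert({"cpu": 0.85}, queries, sent.append)   # triggers a mail
index_and_alert({"cpu": 0.30}, queries, sent.append)   # no mail
```

The key design point is that nothing fires on its own: the percolate step and the email are both actions your ingestion pipeline performs on every document.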

Personally, I don't think a single 70% CPU spike is a good measure to trigger an alert. That is why you should use Watcher with aggregations, so the alert only fires if usage stays at that level for a period of time. But that is totally up to you, of course.
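A sketch of what that Watcher input and condition could look like (the index pattern and metric field name are assumptions, e.g. a typical Metricbeat field, not confirmed in this thread):

```python
# Watch input: average CPU over the last 10 minutes, so a single
# momentary spike does not fire the alert.
watch_input = {
    "search": {
        "request": {
            "indices": ["metricbeat-*"],  # assumed index pattern
            "body": {
                "size": 0,
                "query": {
                    "range": {"@timestamp": {"gte": "now-10m"}}
                },
                "aggs": {
                    "avg_cpu": {
                        # assumed Metricbeat field name
                        "avg": {"field": "system.cpu.user.pct"}
                    }
                }
            }
        }
    }
}

# The condition compares the aggregated average, not a single sample.
condition = {
    "compare": {"ctx.payload.aggregations.avg_cpu.value": {"gt": 0.7}}
}
```

With this shape, the alert only fires when the 10-minute average stays above 70%, which is the sustained-load behaviour Alex suggests.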

Hope this helps.

--Alex

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.