More detail about why you recommend running Watcher on a separate monitoring cluster (for larger clusters)?

Hi!

We've been playing with Watcher in our dev environment and were just about to deploy it on our main, approximately 15-node cluster that hosts 10 TB of data. I just now saw "If you have a larger cluster, we recommend running Watcher on a separate monitoring cluster. If you’re using a separate cluster for Marvel, you can install Watcher on the nodes you’re using to store your Marvel data" here.

Can you give us some more detail about why this is recommended and the pros and cons of each deployment option (a separate cluster vs. the main cluster)? We're currently running Elasticsearch 1.7.2 on IaaS VMs in Azure, and we plan to upgrade to the latest version (or at least 2.x) within the next few months.

Also, specifically, we plan to use the index action in our watches to load data from the watch payload into an index. If we have Watcher running on a separate cluster, can we use the index action to load the data from the watch payload back into an index on the external cluster we're watching?

If it would be possible to get this information in the next few hours, that'd be great, because we're patching and restarting the cluster tonight; if we're going to install Watcher on the main cluster rather than our Marvel cluster, we want to do it along with that patching/restart.

Thanks much in advance!
Casie

Hi Casie,

We have that recommendation for a few reasons. First, it's certainly not required to run Watcher separately, but for larger use cases (heavily used clusters, or heavy Watcher usage) it's a good idea.

There are two main reasons. First, watch execution today is managed by, and occurs on, the currently elected master node. The overhead isn't significant for a small number of watches, but the master node orchestrates the scheduling and acts as the coordinating node for the input queries, so it does use some CPU and memory, and we like being defensive about our master nodes!
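(As a quick aside, if you ever want to check which node is currently carrying that extra work, the cat master API shows the elected master. `localhost:9200` below is just a placeholder for any node in your cluster:)

```
# Shows the node ID, host, IP, and name of the currently elected master
curl -s 'localhost:9200/_cat/master?v'
```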

The second reason is that we store the watch history on the cluster where Watcher is executing. The watch history has a lot of detail about each execution of each watch, and it can turn out to be a fair amount of data, depending on how many watches you have and how often they execute. Running Watcher on a separate cluster keeps that history data there too, so your production nodes don't have to do the indexing for Watcher :slight_smile:
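If you want a rough feel for how much history you're already generating in dev, the cat indices API will show you. The `.watch_history-*` pattern below is what Watcher 1.x uses for its time-based history indices; double-check the exact name on your version:

```
# Size and doc count of Watcher's history indices on the cluster running Watcher
curl -s 'localhost:9200/_cat/indices/.watch_history-*?v&h=index,docs.count,store.size'
```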

Since a full cluster restart isn't something you do every day, one option would be to install Watcher on the production cluster now, but use Shield to limit which users are actually able to create watches on the production cluster. Then you can make your decision at your own pace. To be clear, though, I agree with the docs, and I suggest planning to run Watcher on a separate cluster once you're in production.
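As a rough sketch only (the exact privilege names vary by release, so check the Shield docs that match your Elasticsearch version; `manage_watcher` and `monitor_watcher` are the cluster privileges Shield 2.x uses), the roles.yml side of that restriction might look something like:

```
# roles.yml - illustrative only, role names are made up
# The handful of users allowed to create and manage watches:
watch_admins:
  cluster:
    - manage_watcher

# Everyone else can at most view watches and watch history:
watch_readers:
  cluster:
    - monitor_watcher
```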

Thanks,
Steve


Thank you for the quick reply! Can you help with this question: if we have Watcher running on a separate cluster, can we use the index action to load the data from the watch payload back into an index on the external cluster we're watching? I edited my post to include that question about 30 minutes after the initial post.

Thank you again!

Casie

Not with the index action, I'm afraid; the index action writes to the cluster Watcher is running on. You can, however, use the Webhook action to make an index request against the production cluster!
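A rough sketch of what that could look like with Watcher running on the monitoring cluster: the http input queries the production cluster, and the webhook action posts a document back into an index there. The host names, the index/type in the path, and the body fields are placeholders for illustration, and you'd want to add auth/TLS as appropriate for your environment:

```
curl -XPUT 'localhost:9200/_watcher/watch/errors_to_production' -d '{
  "trigger": { "schedule": { "interval": "5m" } },
  "input": {
    "http": {
      "request": {
        "host": "production-cluster.example.com",
        "port": 9200,
        "path": "/logs-*/_search",
        "body": "{ \"query\": { \"match\": { \"level\": \"error\" } } }"
      }
    }
  },
  "condition": {
    "compare": { "ctx.payload.hits.total": { "gt": 0 } }
  },
  "actions": {
    "index_on_production": {
      "webhook": {
        "host": "production-cluster.example.com",
        "port": 9200,
        "method": "post",
        "path": "/watch_results/event",
        "body": "{ \"error_count\": \"{{ctx.payload.hits.total}}\", \"executed_at\": \"{{ctx.execution_time}}\" }"
      }
    }
  }
}'
```

The webhook body is a template, so you can pull whatever you need out of `ctx.payload` and shape the document however you like before it lands in the production index.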
