Watcher enabled on master

Hi All,

In my (licensed) production cluster I have

xpack.watcher.enabled: true

and I want to disable that on my masters. If I set it to false and restart the masters, I can no longer delete or edit my watches.
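
For reference, this is roughly how I check which nodes have actually picked up the setting, using the nodes info API from Kibana Dev Tools (the filter_path is optional and only there to trim the output):

GET _nodes/settings?filter_path=nodes.*.name,nodes.*.settings.xpack.watcher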

Do I need to restart all my ES nodes?

My setup is:
ES 6.2.4,
3 masters,
4 coordinators,
10 SSD data nodes.

Thanks,
Paul.

Watches will never execute on master nodes if you have dedicated data nodes. The execution happens where the .watches shards are, so there is no need to change this configuration across your cluster. In more recent versions we also reduce the thread pool to a single thread, so no resources are wasted on nodes where watches will never be executed (like master and coordinating-only nodes).
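
If you want to verify that for yourself, a quick sketch from Kibana Dev Tools: check where the .watches shards are allocated and what Watcher reports per node in its stats. The shards should only ever sit on your data nodes.

GET _cat/shards/.watches?v
GET _xpack/watcher/stats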

Maybe you can share more of the reasoning or what you are trying to achieve, as I am still missing those bits.

Thanks!

Hi Alexander,

Thanks for the clarification. It is strange that, while there are no .watches indices on the master nodes, I am unable to do anything with the watches in Kibana or on the CLI (i.e. edit, delete).

What I am trying to achieve is quite simple, actually: explicitly disable xpack.watcher.enabled (i.e. set it to false) and still be able to edit or delete my watches from Kibana, so I am unclear what you are missing.
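
To be concrete, these are the kind of calls that no longer work for me, from Kibana Dev Tools or curl, once the masters have the setting on false (my_watch is just a placeholder id):

GET _xpack/watcher/watch/my_watch
DELETE _xpack/watcher/watch/my_watch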

I have currently set xpack.watcher.enabled to false on the master nodes and I am doing a rolling restart of the cluster to see if that helps...

Sorry, I was not clear with my question. My question is: why do you want to disable that setting? Leaving it enabled does not have any impact on your nodes - with the very notable exception that everything keeps working as expected. You can just leave it as is without having to fear any additional load on your masters (except when you run the execute watch API against a master node, for example).
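
To illustrate that exception: a master node would only do watch-related work if you explicitly point a call like the execute watch API at it, for example (my_watch being a placeholder id):

POST _xpack/watcher/watch/my_watch/_execute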

I recommend leaving this setting the same on all nodes.

Ah ok... that clears it up :slight_smile: The reason was that I sometimes get a timeout on one of my watches, and others are executed twice for some reason. I have a support ticket for this and it was suggested to stop and start Watcher. That worked for some time, but I am facing the same problems again. So that led me to believe that maybe the master nodes were trying to execute the watch and failed on system resources, although I could not find any evidence for it.
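
For reference, the stop/start I have been doing is just via the Watcher API (with the stats call to check the state afterwards), roughly:

POST _xpack/watcher/_stop
POST _xpack/watcher/_start
GET _xpack/watcher/stats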

Again, thanks for the information, I will re-enable the setting.

Oh, now we are talking :slight_smile:

I'd highly encourage you to upgrade to 6.4.1. Since 6.2.4 there have been two fixes for issues that could cause duplicate watch execution or no execution at all. Stopping/starting basically resets the current state, but it could go out of sync again.

The PRs for those fixes are
