I have a two-node cluster running Elasticsearch 1.5.2 on which I have installed the Watcher plugin. After the installation there is nothing unusual in the logs, but the cluster health appears red.
Also, if I fetch the Watcher stats with curl -XGET 'http://localhost:9200/_watcher/stats?pretty', the output is:
{
  "watcher_state" : "stopped",
  "watch_count" : 0,
  "execution_thread_pool" : {
    "queue_size" : 0,
    "max_size" : 0
  }
}
The watcher_state is stopped rather than started.
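For comparison, on a healthy cluster the same call should report a started state, roughly like this (illustrative values only; watch_count and the thread pool numbers depend on your setup):

curl -XGET 'http://localhost:9200/_watcher/stats?pretty'
{
  "watcher_state" : "started",
  "watch_count" : 1,
  "execution_thread_pool" : {
    "queue_size" : 0,
    "max_size" : 10
  }
}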
If the cluster state is red, then it is likely that Watcher can't start either.
If you check the cat shards API (host:9200/_cat/shards), do you see unassigned shards for the .watches, .triggered_watches or .watch_history-* indices?
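A quick way to narrow down which indices are making the cluster red is to ask the cluster health API for per-index detail (a sketch; it lists a status per index, so red entries stand out):

curl -XGET 'http://localhost:9200/_cluster/health?level=indices&pretty'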
The first image displays the cluster health and Watcher stats.
The second image shows the output of host:9200/_cat/shards. I cannot see any unassigned shards for those indices...
Can you grep the cat shards output?

curl 'localhost:9200/_cat/shards' | grep watch
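If any Watcher shards are unassigned, they show up with the state UNASSIGNED, roughly like this (illustrative output only; the columns are index, shard, primary/replica, and state):

.watches 0 p UNASSIGNED
.watches 0 r UNASSIGNED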
Also, can you set the Watcher logging to debug, restart your nodes, and share what you see in the log files? To do this, add watcher: DEBUG under the logger section in the logging.yml file:

logger:
  watcher: DEBUG
In the logs, I am getting the following output.
[2015-07-13 13:08:05,002][DEBUG][watcher ] [Franz Kafka] not starting watcher. because the cluster isn't ready yet to run watcher
And here is the grep watch output image...
So all the shards of the .watches index are unassigned, and that is preventing Watcher from starting.
I don't know why these shards are unassigned, because there is nothing in the logs about this.
If you have your watches available elsewhere, you can just remove the .watches index, wait for Watcher to start, and then re-insert your watches. Watcher should then be operational.
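A minimal sketch of that recovery sequence (the watch id my_watch and the file my_watch.json are placeholders; adapt them to your own watches):

# Delete the broken .watches index
curl -XDELETE 'http://localhost:9200/.watches'

# Give Watcher a moment, then confirm it has started
curl -XGET 'http://localhost:9200/_watcher/stats?pretty'

# Re-insert each watch via the Watcher put watch API
curl -XPUT 'http://localhost:9200/_watcher/watch/my_watch' -d @my_watch.json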
Yes, deleting the .watches index turned the cluster status to green. Now things are working perfectly.
Thanks for your help.
Btw, I used this command to delete the .watches index:

curl -XDELETE 'http://localhost:9200/*watches' -v
That curl command invokes the delete index API, so that is good.
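One thing worth noting about the wildcard form: *watches matches any index name ending in watches, so it can delete more than you intend. If you want Elasticsearch to reject wildcard deletes and require explicit index names, you can add this to elasticsearch.yml (a defensive setting, not required for the fix above):

action.destructive_requires_name: true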