Can't shutdown Elasticsearch when a watch is stuck

Es log:

[2015-10-09 12:29:25,149][WARN ][            ] [crawler_service_001] failed to acquire lock on watch [my-watch] (waited for [30 seconds]). It is possible that for some reason this watch execution is stuck
[2015-10-09 12:37:48,938][INFO ][action.admin.cluster.node.shutdown] [crawler_service_001] [cluster_shutdown]: requested, shutting down in [1s]
[2015-10-09 12:37:49,941][INFO ][watcher                  ] [crawler_service_001] stopping watch service...
[2015-10-09 12:38:49,919][INFO ][node                     ] [crawler_service_001] stopping ...

I created a watch and found it stuck, so I tried to shut down Elasticsearch with:

 curl -XPOST 'http://localhost:9200/_shutdown'

But Elasticsearch can't shut down; it has been stuck in "stopping" for more than 10 minutes now (counted from the time I posted the request). What should I do?

Have you tried using service elasticsearch shutdown?

Sorry, but I don't understand what service elasticsearch shutdown means... Could you give me something like curl -X '...'? Or some more details?

It's a Linux command, so it won't work on Windows.

I googled service, but I didn't start Elasticsearch as a service. I just run ES_HOME/bin/elasticsearch.

Ok then this doesn't matter.

What's the current status of the watch right now?

I ran this before trying to shut down Elasticsearch:

curl -XGET 'http://localhost:9200/_watcher/stats/_all?emit_stacktraces&pretty'
{
  "watcher_state" : "started",
  "watch_count" : 0,
  "execution_thread_pool" : {
    "queue_size" : 17,
    "max_size" : 10
  },
  "current_watches" : [ {
    "watch_id" : "my-watch",
    "watch_record_id" : "my-watch_2-2015-10-09T03:32:00.458Z",
    "triggered_time" : "2015-10-09T03:32:00.458Z",
    "execution_time" : "2015-10-09T03:32:00.458Z",
    "execution_phase" : "actions",
    "executed_actions" : [ ],
    "stack_trace" : [ " Method)", "", "", "", "", "", "", "com.sun.mail.util.LineInputStream.readLine(", "com.sun.mail.smtp.SMTPTransport.readServerResponse(", "com.sun.mail.smtp.SMTPTransport.openServer(", "com.sun.mail.smtp.SMTPTransport.protocolConnect(", "javax.mail.Service.connect(", "", "", "", "", "org.elasticsearch.watcher.actions.ActionWrapper.execute(", "org.elasticsearch.watcher.execution.ExecutionService.executeInner(", "org.elasticsearch.watcher.execution.ExecutionService.execute(", "org.elasticsearch.watcher.execution.ExecutionService$", "java.util.concurrent.ThreadPoolExecutor.runWorker(", "java.util.concurrent.ThreadPoolExecutor$", "" ]
  } ],
  "queued_watches" : [ {
    "watch_id" : "my-watch",
    "watch_record_id" : "my-watch_12-2015-10-09T03:42:00.138Z",
    "triggered_time" : "2015-10-09T03:42:00.137Z",
    "execution_time" : "2015-10-09T03:42:00.138Z"
  }, {
    "watch_id" : "my-watch",
    "watch_record_id" : "my-watch_13-2015-10-09T03:43:00.154Z",
    "triggered_time" : "2015-10-09T03:43:00.154Z",
    "execution_time" : "2015-10-09T03:43:00.154Z"
  }, {
    "watch_id" : "my-watch",
    "watch_record_id" : "my-watch_14-2015-10-09T03:44:00.170Z",
    "triggered_time" : "2015-10-09T03:44:00.170Z",
    "execution_time" : "2015-10-09T03:44:00.170Z"
  }, {
    "watch_id" : "my-watch",
    "watch_record_id" : "my-watch_15-2015-10-09T03:45:00.187Z",
    "triggered_time" : "2015-10-09T03:45:00.186Z",
    "execution_time" : "2015-10-09T03:45:00.187Z"
  }, {
    "watch_id" : "my-watch",
    "watch_record_id" : "my-watch_16-2015-10-09T03:46:00.207Z",
    "triggered_time" : "2015-10-09T03:46:00.207Z",
    "execution_time" : "2015-10-09T03:46:00.207Z"
  }, {
    "watch_id" : "my-watch",
    "watch_record_id" : "my-watch_17-2015-10-09T03:47:00.223Z",
    "triggered_time" : "2015-10-09T03:47:00.223Z",
    "execution_time" : "2015-10-09T03:47:00.223Z"
  }, {
    "watch_id" : "my-watch",
    "watch_record_id" : "my-watch_18-2015-10-09T03:48:00.241Z",
    "triggered_time" : "2015-10-09T03:48:00.241Z",
    "execution_time" : "2015-10-09T03:48:00.241Z"
  }, {
    "watch_id" : "my-watch",
    "watch_record_id" : "my-watch_19-2015-10-09T03:49:00.273Z",
    "triggered_time" : "2015-10-09T03:49:00.272Z",
    "execution_time" : "2015-10-09T03:49:00.273Z"
  }, {
    "watch_id" : "my-watch",
    "watch_record_id" : "my-watch_20-2015-10-09T03:50:00.291Z",
    "triggered_time" : "2015-10-09T03:50:00.291Z",
    "execution_time" : "2015-10-09T03:50:00.291Z"
  }, {
    "watch_id" : "my-watch",
    "watch_record_id" : "my-watch_21-2015-10-09T03:51:00.308Z",
    "triggered_time" : "2015-10-09T03:51:00.308Z",
    "execution_time" : "2015-10-09T03:51:00.308Z"
  } ... (remaining queued watches omitted; the post length limit was exceeded) ]
}

But now, when I run the same request, I just get:

curl -XGET 'http://localhost:9200/_watcher/stats/_all?emit_stacktraces&pretty'
curl: (7) couldn't connect to host

It looks like there is a problem with your SMTP setup. What does my-watch look like?
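The stack trace above is sitting in com.sun.mail.smtp.SMTPTransport.openServer, i.e. waiting on the mail server. A rough reachability check (assumes bash; localhost:2525 is a placeholder, substitute your real SMTP host and port):

```shell
# Placeholder host/port: replace localhost and 2525 with your real SMTP server.
if timeout 5 bash -c 'exec 3<>/dev/tcp/localhost/2525' 2>/dev/null; then
    SMTP_REACHABLE=yes
else
    SMTP_REACHABLE=no
fi
echo "SMTP port reachable: $SMTP_REACHABLE"
```

If the port connects but the session still hangs, the JavaMail properties mail.smtp.connectiontimeout and mail.smtp.timeout put an upper bound on how long an email send can block.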

You might need to kill the process to get ES to stop.
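A graceful-first kill sequence looks roughly like this (assumes a Unix shell; demonstrated on a dummy detached process instead of a real node — in real use you would get the PID with something like pgrep -f org.elasticsearch.bootstrap):

```shell
# Dummy long-running process standing in for Elasticsearch (detached so the
# liveness check behaves as it would for a real daemon).
PID=$( (sleep 300 >/dev/null 2>&1 & echo $!) )
kill "$PID"                      # SIGTERM first: lets the JVM shut down cleanly
STOPPED=no
for _ in 1 2 3 4 5; do           # grace period before escalating
    if ! kill -0 "$PID" 2>/dev/null; then STOPPED=yes; break; fi
    sleep 1
done
if [ "$STOPPED" = no ]; then
    kill -9 "$PID"               # last resort; risks leaving shards in a bad state
fi
echo "stopped: $STOPPED"
```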

public boolean deployWatch() {
        WatchSourceBuilder watchSourceBuilder = WatchSourceBuilders.watchBuilder();
        watchSourceBuilder.trigger(TriggerBuilders.schedule(Schedules.cron("0 0/1 * * * ?")));
        SearchRequest request = Requests.searchRequest("logstash*").source(searchSource()
                .query(filteredQuery(matchQuery("error_code", "*"), boolFilter()))); // filter clauses truncated in the original post
        watchSourceBuilder.input(new SearchInput(request, null));
        watchSourceBuilder.condition(new ScriptCondition(Script.inline(" > 0").build())); // script body truncated in the original post
        EmailTemplate.Builder emailBuilder = EmailTemplate.builder(); // to/from setup truncated in the original post
        emailBuilder.subject("Error recently encountered");
        EmailAction.Builder emailActionBuilder = EmailAction.builder(emailBuilder.build()); // argument truncated in the original post
        // (the lines wiring the email action into the watch were truncated in the original post)
        PutWatchResponse putWatchResponse = watcherClient.preparePutWatch("my-watch")
                .setSource(watchSourceBuilder) // remainder of the call chain reconstructed
                .get();
        if (!putWatchResponse.isCreated()) {
            return false;
        }
        return true;
}

I use the Java client to put that watch. It gets stuck after sending two emails.

I used to use kill pid, or kill -9 pid if that didn't work, to terminate Elasticsearch, but the last time it corrupted the Lucene index of some shards... Since then I never use kill to shut down Elasticsearch...

Okay, I used kill -9 and restarted. Nothing went wrong, and my-watch is deleted (I had tried a force delete before the shutdown). But is there any better way than kill -9 to handle things like this?
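For what it's worth, a gentler sequence than going straight to kill -9 may be to stop the Watcher service and force-delete the stuck watch before requesting the node shutdown (hedged: endpoints and the force parameter as documented for the Watcher 1.x plugin; check your version):

```shell
# Sketch for Watcher 1.x; adjust host/port to your setup.
curl -XPUT 'http://localhost:9200/_watcher/_stop'
curl -XDELETE 'http://localhost:9200/_watcher/watch/my-watch?force=true'
curl -XPOST 'http://localhost:9200/_shutdown'
```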

Not answering directly, but FYI I just reported this issue to the Watcher project.

Could you give me the link of that issue?

No. It's not a public URL.

Watcher should shut down regardless of how your watches are configured. When Watcher stops, it will not execute any new watches, will remove any queued watch executions, and will then wait up to 30 seconds for the executions already in progress to complete. If by then not all watch executions have completed, Watcher will continue to shut down anyway.
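That bounded wait can be illustrated with a toy script (dummy detached process standing in for a stuck watch execution; the wait is shortened from Watcher's 30s to 3s):

```shell
# Stop accepting work, wait a bounded time for in-flight work, then move on.
PID=$( (sleep 100 >/dev/null 2>&1 & echo $!) )   # stand-in for a stuck execution
WAITED=0
MAX_WAIT=3                                       # Watcher waits 30s; shortened here
while kill -0 "$PID" 2>/dev/null && [ "$WAITED" -lt "$MAX_WAIT" ]; do
    sleep 1
    WAITED=$((WAITED + 1))
done
if kill -0 "$PID" 2>/dev/null; then
    echo "execution still running after ${WAITED}s; continuing shutdown anyway"
fi
kill -9 "$PID" 2>/dev/null                       # cleanup of the dummy process
```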

If Watcher gets stuck again, can you share a hot threads output here?
curl 'localhost:9200/_cluster/nodes/hotthreads'

Okay, I will.

Something similar occurs:

curl 'localhost:9200/_cluster/nodes/hotthreads'

::: [crawler_service_001][2Uv6aeMXTY2-QU-wPSHT5g][crawlerservice2][inet[/]]
   Hot threads at 2015-10-09T09:42:54.691Z, interval=500ms, busiestThreads=3, ignoreIdleThreads=true:
    0.0% (116.9micros out of 500ms) cpu usage by thread 'elasticsearch[crawler_service_001][transport_client_timer][T#1]{Hashed wheel timer #1}'
     10/10 snapshots sharing following 5 elements
       java.lang.Thread.sleep(Native Method)

Then I try to shut down Elasticsearch:

curl -XPOST 'http://localhost:9200/_shutdown'


[2015-10-09 17:46:41,904][INFO ][action.admin.cluster.node.shutdown] [crawler_service_001] [cluster_shutdown]: requested, shutting down in [1s]
[2015-10-09 17:46:42,906][INFO ][watcher                  ] [crawler_service_001] stopping watch service...
^C[2015-10-09 17:46:56,950][INFO ][node                     ] [crawler_service_001] stopping ...

ES gets stuck again when the watch queue is blocked...

Is that the entire hot threads dump? I would expect it to contain more stack traces of running threads. Can you also share the hot threads dump taken after you instruct ES to shut down?

How do I get the entire hot threads dump? Is curl 'localhost:9200/_cluster/nodes/hotthreads' not enough?
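For what it's worth, the hot threads API takes parameters that widen the dump; the defaults visible in the output above (busiestThreads=3, ignoreIdleThreads=true) are why so little was reported. A broader request would be something like:

```shell
# Report more threads and don't filter out the ones that look idle.
curl 'localhost:9200/_cluster/nodes/hotthreads?threads=10000&ignore_idle_threads=false'
```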