Request Timeout

I just upgraded Kibana from 6.2.1 to 6.2.2 and everything is going well, except that I see the following error in the backend console. Other than this, the functionality is not impacted.

Unhandled rejection Error: Request Timeout after 30000ms
    at /Users/apple/Desktop/kibana-6.2.2-darwin-x86_64/node_modules/elasticsearch/src/lib/transport.js:342:15
    at Timeout.<anonymous> (/Users/apple/Desktop/kibana-6.2.2-darwin-x86_64/node_modules/elasticsearch/src/lib/transport.js:371:7)
    at ontimeout (timers.js:386:11)
    at tryOnTimeout (timers.js:250:5)
    at Timer.listOnTimeout (timers.js:214:5)

Hi Pororo,

What happens when you send a request like this to Elasticsearch:
curl -XGET 'http://localhost:9200/_cluster/health?pretty' - do you still get that error message?
I'm thinking this is a Java heap issue or a shards issue. How much data do you hold in the cluster? What is the average shard size? Having a large number of small shards can be very inefficient, as each shard carries some overhead. Querying across such a large number of shards, especially if that results in disk I/O against slow storage because the page cache is small, can be slow.
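For example, a couple of read-only requests like these will show how many indices and shards you have and how big they are (localhost:9200 is just an assumption here - use your own host and port):

curl -XGET 'http://localhost:9200/_cat/indices?v&s=store.size:desc'
curl -XGET 'http://localhost:9200/_cat/shards?v'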

Thanks
Rashmi

{
  "cluster_name" : "elasticsearch",
  "status" : "red",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 49,
  "active_shards" : 49,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 13,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 79.03225806451613
}

This is what it showed. Basically, the data comes from a JSON file I imported earlier, and I think that file has about 40,000 items.

I am new to ES, so is there any way I can fix this?

Thanks!!

Hmm, I see your ES cluster is red; we will have to figure that out first. This could be ES-related, so maybe check the slow logs. It could also be network-related, perhaps DNS if you are using a fully qualified name. "unassigned_shards" : 13 is not a good sign.

This documentation provides more info on slow logs: https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules-slowlog.html#search-slow-log
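If it helps, a rough sketch of both checks looks like this (localhost:9200, the index name, and the thresholds are all placeholders):

# ask the cluster why a shard is unassigned (explains the first unassigned shard it finds)
curl -XGET 'http://localhost:9200/_cluster/allocation/explain?pretty'

# turn on search slow logging for one of your indices
curl -XPUT 'http://localhost:9200/my-index/_settings' -H 'Content-Type: application/json' -d '
{
  "index.search.slowlog.threshold.query.warn": "10s",
  "index.search.slowlog.threshold.fetch.warn": "1s"
}'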

Cheers
Rashmi

Thank you so much, I will try it out.

Actually, all those problems started after I upgraded the Elastic Stack from 6.2.1 to 6.2.2.

Before, everything was good.

This is the log output when I restart my ES:

[2018-03-19T18:14:41,531][INFO ][o.e.g.GatewayService     ] [mX5UKVk] recovered [44] indices into cluster_state
[2018-03-19T18:14:42,367][DEBUG][o.e.a.s.TransportSearchAction] [mX5UKVk] All shards failed for phase: [query]
[2018-03-19T18:14:42,368][DEBUG][o.e.a.s.TransportSearchAction] [mX5UKVk] All shards failed for phase: [query]
[2018-03-19T18:14:42,368][DEBUG][o.e.a.s.TransportSearchAction] [mX5UKVk] All shards failed for phase: [query]
[2018-03-19T18:14:42,368][DEBUG][o.e.a.s.TransportSearchAction] [mX5UKVk] All shards failed for phase: [query]
[2018-03-19T18:14:42,368][DEBUG][o.e.a.s.TransportSearchAction] [mX5UKVk] All shards failed for phase: [query]
[2018-03-19T18:14:42,368][ERROR][o.e.x.w.i.s.ExecutableSearchInput] [mX5UKVk] failed to execute [search] input for watch [Tq-xHNY_S0qxyMtskMHGiA_elasticsearch_cluster_status], reason [all shards failed]
[2018-03-19T18:14:42,368][ERROR][o.e.x.w.i.s.ExecutableSearchInput] [mX5UKVk] failed to execute [search] input for watch [Tq-xHNY_S0qxyMtskMHGiA_elasticsearch_nodes], reason [all shards failed]
[2018-03-19T18:14:42,369][ERROR][o.e.x.w.i.s.ExecutableSearchInput] [mX5UKVk] failed to execute [search] input for watch [Tq-xHNY_S0qxyMtskMHGiA_elasticsearch_version_mismatch], reason [all shards failed]
[2018-03-19T18:14:42,368][ERROR][o.e.x.w.i.s.ExecutableSearchInput] [mX5UKVk] failed to execute [search] input for watch [Tq-xHNY_S0qxyMtskMHGiA_xpack_license_expiration], reason [all shards failed]
[2018-03-19T18:14:42,368][ERROR][o.e.x.w.i.s.ExecutableSearchInput] [mX5UKVk] failed to execute [search] input for watch [Tq-xHNY_S0qxyMtskMHGiA_logstash_version_mismatch], reason [all shards failed]
[2018-03-19T18:14:42,429][WARN ][o.e.x.w.e.ExecutionService] [mX5UKVk] failed to execute watch [Tq-xHNY_S0qxyMtskMHGiA_elasticsearch_version_mismatch]
[2018-03-19T18:14:42,429][WARN ][o.e.x.w.e.ExecutionService] [mX5UKVk] failed to execute watch [Tq-xHNY_S0qxyMtskMHGiA_elasticsearch_cluster_status]
[2018-03-19T18:14:42,430][WARN ][o.e.x.w.e.ExecutionService] [mX5UKVk] failed to execute watch [Tq-xHNY_S0qxyMtskMHGiA_logstash_version_mismatch]
[2018-03-19T18:14:42,432][DEBUG][o.e.a.s.TransportSearchAction] [mX5UKVk] All shards failed for phase: [query]
[2018-03-19T18:14:42,433][ERROR][o.e.x.w.i.s.ExecutableSearchInput] [mX5UKVk] failed to execute [search] input for watch [Tq-xHNY_S0qxyMtskMHGiA_kibana_version_mismatch], reason [all shards failed]
[2018-03-19T18:14:42,444][WARN ][o.e.x.w.e.ExecutionService] [mX5UKVk] failed to execute watch [Tq-xHNY_S0qxyMtskMHGiA_kibana_version_mismatch]
[2018-03-19T18:14:43,360][INFO ][o.e.c.r.a.AllocationService] [mX5UKVk] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.monitoring-es-6-2018.02.20][0], [.kibana][0], [.watcher-history-7-2018.02.20][0]] ...]).

Honestly, I was hesitant to upgrade, because I knew there would be a bunch of issues like this :(

For now, the health status is

{
"cluster_name": "elasticsearch",
"status": "yellow",
"timed_out": false,
"number_of_nodes": 1,
"number_of_data_nodes": 1,
"active_primary_shards": 52,
"active_shards": 52,
"relocating_shards": 0,
"initializing_shards": 0,
"unassigned_shards": 10,
"delayed_unassigned_shards": 0,
"number_of_pending_tasks": 0,
"number_of_in_flight_fetch": 0,
"task_max_waiting_in_queue_millis": 0,
"active_shards_percent_as_number": 83.87096774193549
}

"All shards failed" is not a good sign. What is your ES_HEAP_SIZE?
There's no way to fix this at startup, as something bad has happened and corrupted the data, and if you delete the shard you lose the data in it. It is possible that on your restart some shards were not recovered, causing the cluster to stay red.
If you hit http://:9200/_cluster/health/?level=shards (fill in your host) you can look for red shards. My solution was to simply delete that index completely.

NOTE - the following is done entirely at your own risk. If you don't have the data elsewhere, or a backup, there is a chance you will lose data. MAKE SURE YOU HAVE A BACKUP.
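For illustration only, the checks and the delete could look like this (localhost:9200 and the index name are placeholders - substitute whichever index actually shows up red):

curl -XGET 'http://localhost:9200/_cluster/health?level=shards&pretty'

# or list shards with their state and, for unassigned ones, the reason
curl -XGET 'http://localhost:9200/_cat/shards?v&h=index,shard,prirep,state,unassigned.reason'

# only once you have a backup: remove the broken index
curl -XDELETE 'http://localhost:9200/my-broken-index'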

It seems to me that this is a memory issue in Elasticsearch, and hence the shards are failing. Since it's pretty clear from the stack trace that you are hitting the error in Elasticsearch too, you should look at its logs and start debugging from there.

Hope this helps,
Rashmi

The shard-level health shows that several of the indices I created are yellow.

Try out my suggestion of backing up your data, deleting the index, and re-trying. Keep an eye on your logs.
Hope this helps
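If you want to use the snapshot API for the backup, a minimal sketch looks like this (the repository name and location are placeholders, and the location has to be listed under path.repo in elasticsearch.yml):

# register a filesystem snapshot repository
curl -XPUT 'http://localhost:9200/_snapshot/my_backup' -H 'Content-Type: application/json' -d '
{
  "type": "fs",
  "settings": { "location": "/mount/backups/my_backup" }
}'

# snapshot all open indices
curl -XPUT 'http://localhost:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true'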

Cheers
Rashmi

The cluster health has turned green now.

But this information still shows up:

[2018-03-19T19:31:38,192][INFO ][o.e.g.GatewayService     ] [mX5UKVk] recovered [42] indices into cluster_state
[2018-03-19T19:31:39,240][DEBUG][o.e.a.s.TransportSearchAction] [mX5UKVk] All shards failed for phase: [query]
[2018-03-19T19:31:39,241][ERROR][o.e.x.w.i.s.ExecutableSearchInput] [mX5UKVk] failed to execute [search] input for watch [Tq-xHNY_S0qxyMtskMHGiA_logstash_version_mismatch], reason [all shards failed]
[2018-03-19T19:31:39,254][DEBUG][o.e.a.s.TransportSearchAction] [mX5UKVk] All shards failed for phase: [query]
[2018-03-19T19:31:39,255][ERROR][o.e.x.w.i.s.ExecutableSearchInput] [mX5UKVk] failed to execute [search] input for watch [Tq-xHNY_S0qxyMtskMHGiA_elasticsearch_version_mismatch], reason [all shards failed]
[2018-03-19T19:31:39,271][DEBUG][o.e.a.s.TransportSearchAction] [mX5UKVk] All shards failed for phase: [query]
[2018-03-19T19:31:39,271][DEBUG][o.e.a.s.TransportSearchAction] [mX5UKVk] All shards failed for phase: [query]
[2018-03-19T19:31:39,272][ERROR][o.e.x.w.i.s.ExecutableSearchInput] [mX5UKVk] failed to execute [search] input for watch [Tq-xHNY_S0qxyMtskMHGiA_elasticsearch_cluster_status], reason [all shards failed]
[2018-03-19T19:31:39,272][ERROR][o.e.x.w.i.s.ExecutableSearchInput] [mX5UKVk] failed to execute [search] input for watch [Tq-xHNY_S0qxyMtskMHGiA_xpack_license_expiration], reason [all shards failed]
[2018-03-19T19:31:39,276][DEBUG][o.e.a.s.TransportSearchAction] [mX5UKVk] All shards failed for phase: [query]
[2018-03-19T19:31:39,276][ERROR][o.e.x.w.i.s.ExecutableSearchInput] [mX5UKVk] failed to execute [search] input for watch [Tq-xHNY_S0qxyMtskMHGiA_elasticsearch_nodes], reason [all shards failed]
[2018-03-19T19:31:39,303][WARN ][o.e.x.w.e.ExecutionService] [mX5UKVk] failed to execute watch [Tq-xHNY_S0qxyMtskMHGiA_elasticsearch_version_mismatch]
[2018-03-19T19:31:39,305][WARN ][o.e.x.w.e.ExecutionService] [mX5UKVk] failed to execute watch [Tq-xHNY_S0qxyMtskMHGiA_logstash_version_mismatch]
[2018-03-19T19:31:39,312][WARN ][o.e.x.w.e.ExecutionService] [mX5UKVk] failed to execute watch [Tq-xHNY_S0qxyMtskMHGiA_elasticsearch_cluster_status]
[2018-03-19T19:31:39,323][WARN ][o.e.x.w.e.ExecutionService] [mX5UKVk] failed to execute watch [Tq-xHNY_S0qxyMtskMHGiA_xpack_license_expiration]
[2018-03-19T19:31:39,338][DEBUG][o.e.a.s.TransportSearchAction] [mX5UKVk] All shards failed for phase: [query]
[2018-03-19T19:31:39,339][ERROR][o.e.x.w.i.s.ExecutableSearchInput] [mX5UKVk] failed to execute [search] input for watch [Tq-xHNY_S0qxyMtskMHGiA_kibana_version_mismatch], reason [all shards failed]
[2018-03-19T19:31:39,354][WARN ][o.e.x.w.e.ExecutionService] [mX5UKVk] failed to execute watch [Tq-xHNY_S0qxyMtskMHGiA_kibana_version_mismatch]
[2018-03-19T19:31:39,996][INFO ][o.e.c.r.a.AllocationService] [mX5UKVk] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[.kibana][0]] ...]).

You have a corrupted watch somewhere. Try this: delete the watch and PUT it again, as described here:

https://www.elastic.co/guide/en/x-pack/5.1/watch-cluster-status.html
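Roughly, the flow from that page is delete, then PUT again; the body below is the tutorial's cluster_health_watch, so adjust the watch ID and definition to whichever watch you actually need to recreate:

curl -XDELETE 'http://localhost:9200/_xpack/watcher/watch/cluster_health_watch'

curl -XPUT 'http://localhost:9200/_xpack/watcher/watch/cluster_health_watch' -H 'Content-Type: application/json' -d '
{
  "trigger": { "schedule": { "interval": "10s" } },
  "input": {
    "http": {
      "request": { "host": "localhost", "port": 9200, "path": "/_cluster/health" }
    }
  },
  "condition": {
    "compare": { "ctx.payload.status": { "eq": "red" } }
  }
}'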

Hope this helps in clearing the clutter.

Thanks
Rashmi

I tried deleting the watch with DELETE _xpack/watcher/watch/cluster_health_watch and got:

{
  "_id": "cluster_health_watch",
  "_version": 3,
  "found": false
}

And meanwhile the cluster has turned red:

{
  "cluster_name": "elasticsearch",
  "status": "red",
  "timed_out": false,
  "number_of_nodes": 1,
  "number_of_data_nodes": 1,
  "active_primary_shards": 42,
  "active_shards": 42,
  "relocating_shards": 0,
  "initializing_shards": 0,
  "unassigned_shards": 3,
  "delayed_unassigned_shards": 0,
  "number_of_pending_tasks": 0,
  "number_of_in_flight_fetch": 0,
  "task_max_waiting_in_queue_millis": 0,
  "active_shards_percent_as_number": 93.33333333333333
}

I believe this is a test cluster and not a production environment. You can safely back up your data and restart the cluster to see whether it turns green again.

What does the log say?

Thanks
Rashmi

I restarted it.

The cluster is green now.

The current log still shows

[2018-03-19T20:48:18,695][INFO ][o.e.g.GatewayService     ] [mX5UKVk] recovered [45] indices into cluster_state
[2018-03-19T20:48:19,533][DEBUG][o.e.a.s.TransportSearchAction] [mX5UKVk] All shards failed for phase: [query]
[2018-03-19T20:48:19,533][DEBUG][o.e.a.s.TransportSearchAction] [mX5UKVk] All shards failed for phase: [query]
[2018-03-19T20:48:19,535][ERROR][o.e.x.w.i.s.ExecutableSearchInput] [mX5UKVk] failed to execute [search] input for watch [Tq-xHNY_S0qxyMtskMHGiA_elasticsearch_version_mismatch], reason [all shards failed]
[2018-03-19T20:48:19,535][ERROR][o.e.x.w.i.s.ExecutableSearchInput] [mX5UKVk] failed to execute [search] input for watch [Tq-xHNY_S0qxyMtskMHGiA_logstash_version_mismatch], reason [all shards failed]
[2018-03-19T20:48:19,538][DEBUG][o.e.a.s.TransportSearchAction] [mX5UKVk] All shards failed for phase: [query]
[2018-03-19T20:48:19,538][DEBUG][o.e.a.s.TransportSearchAction] [mX5UKVk] All shards failed for phase: [query]
[2018-03-19T20:48:19,539][ERROR][o.e.x.w.i.s.ExecutableSearchInput] [mX5UKVk] failed to execute [search] input for watch [Tq-xHNY_S0qxyMtskMHGiA_xpack_license_expiration], reason [all shards failed]
[2018-03-19T20:48:19,539][ERROR][o.e.x.w.i.s.ExecutableSearchInput] [mX5UKVk] failed to execute [search] input for watch [Tq-xHNY_S0qxyMtskMHGiA_elasticsearch_cluster_status], reason [all shards failed]
[2018-03-19T20:48:19,606][WARN ][o.e.x.w.e.ExecutionService] [mX5UKVk] failed to execute watch [Tq-xHNY_S0qxyMtskMHGiA_elasticsearch_cluster_status]
[2018-03-19T20:48:19,606][WARN ][o.e.x.w.e.ExecutionService] [mX5UKVk] failed to execute watch [Tq-xHNY_S0qxyMtskMHGiA_logstash_version_mismatch]
[2018-03-19T20:48:19,608][DEBUG][o.e.a.s.TransportSearchAction] [mX5UKVk] All shards failed for phase: [query]
[2018-03-19T20:48:19,609][ERROR][o.e.x.w.i.s.ExecutableSearchInput] [mX5UKVk] failed to execute [search] input for watch [Tq-xHNY_S0qxyMtskMHGiA_kibana_version_mismatch], reason [all shards failed]
[2018-03-19T20:48:19,613][WARN ][o.e.x.w.e.ExecutionService] [mX5UKVk] failed to execute watch [Tq-xHNY_S0qxyMtskMHGiA_elasticsearch_version_mismatch]
[2018-03-19T20:48:19,619][DEBUG][o.e.a.s.TransportSearchAction] [mX5UKVk] All shards failed for phase: [query]
[2018-03-19T20:48:19,625][ERROR][o.e.x.w.i.s.ExecutableSearchInput] [mX5UKVk] failed to execute [search] input for watch [Tq-xHNY_S0qxyMtskMHGiA_elasticsearch_nodes], reason [all shards failed]
[2018-03-19T20:48:19,634][WARN ][o.e.x.w.e.ExecutionService] [mX5UKVk] failed to execute watch [Tq-xHNY_S0qxyMtskMHGiA_kibana_version_mismatch]
[2018-03-19T20:48:19,660][WARN ][o.e.x.w.e.ExecutionService] [mX5UKVk] failed to execute watch [Tq-xHNY_S0qxyMtskMHGiA_elasticsearch_nodes]
[2018-03-19T20:48:20,899][INFO ][o.e.c.r.a.AllocationService] [mX5UKVk] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[.monitoring-es-6-2018.02.20][0]] ...]).

Can you please paste the watches you have in your system?

Which command should I use to get the watcher info?

This is what I got now.

health status index                           uuid                     pri rep docs.count docs.deleted store.size pri.store.size
green  open   .monitoring-kibana-6-2018.03.13 NGSsCiITS7OlVQGJYLbfmg   1   0        562            0    292.6kb        292.6kb
green  open   .monitoring-kibana-6-2018.03.20 45kQDoHETnuAj1odNA8ktw   1   0         51            0     89.3kb         89.3kb
green  open   .monitoring-es-6-2018.03.15     61f4gpuQSCKVlCfHmjOoww   1   0       2268          280      1.5mb          1.5mb
green  open   .watcher-history-7-2018.02.20   PY1ELg_1RVuhw2LX28b7NQ   1   0       1478            0      1.8mb          1.8mb
green  open   .monitoring-kibana-6-2018.02.24 dbuCESbMRdWwmCJH2S6K4A   1   0       1734            0      600kb          600kb
green  open   .monitoring-es-6-2018.02.22     yLOT7M1fRkCXG1noZ69vcw   1   0      33323          175       18mb           18mb
green  open   .monitoring-kibana-6-2018.02.20 jxNHUF-5S22IrsTnE3jdvw   1   0       1005            0      390kb          390kb
green  open   .triggered_watches              qK8v0B55TDS8lk6WbID46w   1   0         11          200     36.9kb         36.9kb
green  open   .monitoring-kibana-6-2018.03.15 cLXelEoWSKOs9Y6DdB-syw   1   0         44            0    114.6kb        114.6kb
green  open   .monitoring-es-6-2018.03.20     PCcWvtMNQ-an2kYZFyi_Rg   1   0       2638          180      1.6mb          1.6mb
green  open   .watcher-history-7-2018.03.16   wUk-xkiqQ7aPWYGQHfOHsQ   1   0         24            0    244.2kb        244.2kb
green  open   .watcher-history-7-2018.03.19   NJuDVUJxQAad7XaJZ8CHhA   1   0        866            0        1mb            1mb
green  open   .watches                        L1pG4PALQOyEBat3X440oA   1   0          9            0     67.1kb         67.1kb
green  open   .monitoring-kibana-6-2018.02.26 CR7FDJeDQRK7nEjgNltY9Q   1   0        163            0    187.9kb        187.9kb
green  open   .kibana                         kCUspiRaShSCOJuF2Enqdg   1   0         20            2     76.9kb         76.9kb
green  open   .monitoring-es-6-2018.03.16     qL_zoTk-TQ6lUePv0mfTSA   1   0          0            0       261b           261b
green  open   .monitoring-es-6-2018.02.23     8T8LSMRHTJ-VvV8HZiHgKg   1   0     118753          262     65.4mb         65.4mb
green  open   .monitoring-kibana-6-2018.02.22 YR3nV-tpSGiQ1NXCHIhcJg   1   0       1559            0    599.9kb        599.9kb
green  open   .monitoring-kibana-6-2018.02.25 aUMnOCFHS_CeDn1dKpdOvg   1   0       1585            0    602.2kb        602.2kb
green  open   .watcher-history-7-2018.03.15   O-9c__IZQj-YjYo32DMAtA   1   0         76            0      189kb          189kb
green  open   .monitoring-es-6-2018.02.24     FPDHTO-FQFeBVj8QQ8bsRA   1   0      45604           62     25.5mb         25.5mb
green  open   .watcher-history-7-2018.02.26   2Ofc5-n6TtyO6bTh-fk3BA   1   0        241            0    423.8kb        423.8kb
green  open   .monitoring-kibana-6-2018.02.21 mPxdcPOOT0aO68vreA8LVg   1   0        647            0    302.7kb        302.7kb
green  open   .watcher-history-7-2018.02.27   6n8017TvSuuQGIJd_14iKA   1   0        788            0     1007kb         1007kb
green  open   .monitoring-kibana-6-2018.02.23 zUD4bAjgQZGMeVjC4V0fUQ   1   0       4951            0      1.5mb          1.5mb
green  open   .monitoring-es-6-2018.02.25     0yO9VKgiSLenNyk57vvxdw   1   0      46290          136     24.7mb         24.7mb
green  open   .monitoring-es-6-2018.02.27     gVc04o5ZQPW9_BQf7p8kTA   1   0      20415          200     10.2mb         10.2mb
green  open   .monitoring-es-6-2018.03.19     sq3srBoASNqZdDsZtIknSw   1   0      25880          168     12.3mb         12.3mb
green  open   .watcher-history-7-2018.02.23   QOiWf10-SWSIqiqmrB06jw   1   0       6690            0      7.7mb          7.7mb
green  open   .watcher-history-7-2018.02.25   _RvWzdg1QSuwpdcb1Xp62w   1   0       2172            0      2.6mb          2.6mb
green  open   .monitoring-kibana-6-2018.03.16 UgcEUbbCSZqPBTQhihZtqQ   1   0          0            0       261b           261b
green  open   .monitoring-es-6-2018.02.20     C-9-EbCWSeGfN7i_6K_DgQ   1   0      16026           87      7.4mb          7.4mb
green  open   .watcher-history-7-2018.02.24   EXpEnTH4QsqzfbbYnZBShw   1   0       2310            0      2.9mb          2.9mb
green  open   .monitoring-alerts-6            ZZR0euABT66EhJZiEikM7w   1   0         23            4     59.1kb         59.1kb
green  open   .monitoring-es-6-2018.02.21     aM0HicChS7qH3uKpb3eGJw   1   0      11806          192      6.1mb          6.1mb
green  open   .monitoring-es-6-2018.03.13     IB9VZR47SJWdeP7k0n6kRg   1   0      26664          106     12.8mb         12.8mb
green  open   .monitoring-kibana-6-2018.02.27 P4aVfApSS06wWOkmMdZk-A   1   0        550            0    288.5kb        288.5kb
green  open   .watcher-history-7-2018.02.21   nMMyQm9pQ2GBKgJpR79hgw   1   0        868            0        1mb            1mb
green  open   .monitoring-kibana-6-2018.03.19 YevKmOYWS1apTzmP_CX0vw   1   0        473            0    190.7kb        190.7kb
green  open   .security-6                     CNk0yL1BRrixiBa4B__KVA   1   0          4            0     14.6kb         14.6kb
green  open   .reporting-2018.02.18           KA8kmjKiSg2kEu55bVUShQ   1   0          1            0    644.6kb        644.6kb
green  open   .watcher-history-7-2018.03.20   mFswMqRXRY6FWeuEvkJOig   1   0         71            0    283.2kb        283.2kb
green  open   .watcher-history-7-2018.03.13   LzAgQxj8SB2FRt4xYCu63w   1   0       1085            0      1.2mb          1.2mb
green  open   .watcher-history-7-2018.02.22   FqDQaAhFRMK70VIsUAhDsw   1   0       2099            0      2.5mb          2.5mb
green  open   .monitoring-es-6-2018.02.26     -Gfk3K-1SGCjq31QWYCeFA   1   0       5555          259      3.6mb          3.6mb

I would suggest you read through the docs here:

https://www.elastic.co/guide/en/x-pack/current/managing-watches.html
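That page covers listing watches by querying the .watches system index directly; something like this should show what is registered (localhost:9200 assumed):

curl -XGET 'http://localhost:9200/.watches/_search?pretty' -H 'Content-Type: application/json' -d '{ "size": 100 }'

# overall Watcher state and execution statistics
curl -XGET 'http://localhost:9200/_xpack/watcher/stats?pretty'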

Thanks
Rashmi

Is this the information that you wanted?

Can you delete the disabled watch from the UI (although I don't think that's the cause) and restart your ES and Kibana? Check whether the logs are clean.

Thanks
Rashmi

Just deleted and restarted.

[2018-03-19T21:09:04,897][INFO ][o.e.g.GatewayService     ] [mX5UKVk] recovered [45] indices into cluster_state
[2018-03-19T21:09:05,928][DEBUG][o.e.a.s.TransportSearchAction] [mX5UKVk] All shards failed for phase: [query]
[2018-03-19T21:09:05,930][ERROR][o.e.x.w.i.s.ExecutableSearchInput] [mX5UKVk] failed to execute [search] input for watch [Tq-xHNY_S0qxyMtskMHGiA_logstash_version_mismatch], reason [all shards failed]
[2018-03-19T21:09:05,954][DEBUG][o.e.a.s.TransportSearchAction] [mX5UKVk] All shards failed for phase: [query]
[2018-03-19T21:09:05,955][ERROR][o.e.x.w.i.s.ExecutableSearchInput] [mX5UKVk] failed to execute [search] input for watch [Tq-xHNY_S0qxyMtskMHGiA_elasticsearch_version_mismatch], reason [all shards failed]
[2018-03-19T21:09:05,970][DEBUG][o.e.a.s.TransportSearchAction] [mX5UKVk] All shards failed for phase: [query]
[2018-03-19T21:09:05,970][DEBUG][o.e.a.s.TransportSearchAction] [mX5UKVk] All shards failed for phase: [query]
[2018-03-19T21:09:05,971][ERROR][o.e.x.w.i.s.ExecutableSearchInput] [mX5UKVk] failed to execute [search] input for watch [Tq-xHNY_S0qxyMtskMHGiA_elasticsearch_cluster_status], reason [all shards failed]
[2018-03-19T21:09:05,972][ERROR][o.e.x.w.i.s.ExecutableSearchInput] [mX5UKVk] failed to execute [search] input for watch [Tq-xHNY_S0qxyMtskMHGiA_xpack_license_expiration], reason [all shards failed]
[2018-03-19T21:09:05,999][DEBUG][o.e.a.s.TransportSearchAction] [mX5UKVk] All shards failed for phase: [query]
[2018-03-19T21:09:06,000][ERROR][o.e.x.w.i.s.ExecutableSearchInput] [mX5UKVk] failed to execute [search] input for watch [Tq-xHNY_S0qxyMtskMHGiA_elasticsearch_nodes], reason [all shards failed]
[2018-03-19T21:09:06,013][WARN ][o.e.x.w.e.ExecutionService] [mX5UKVk] failed to execute watch [Tq-xHNY_S0qxyMtskMHGiA_elasticsearch_version_mismatch]
[2018-03-19T21:09:06,013][WARN ][o.e.x.w.e.ExecutionService] [mX5UKVk] failed to execute watch [Tq-xHNY_S0qxyMtskMHGiA_logstash_version_mismatch]
[2018-03-19T21:09:06,013][DEBUG][o.e.a.s.TransportSearchAction] [mX5UKVk] All shards failed for phase: [query]
[2018-03-19T21:09:06,020][WARN ][o.e.x.w.e.ExecutionService] [mX5UKVk] failed to execute watch [Tq-xHNY_S0qxyMtskMHGiA_elasticsearch_cluster_status]
[2018-03-19T21:09:06,022][ERROR][o.e.x.w.i.s.ExecutableSearchInput] [mX5UKVk] failed to execute [search] input for watch [Tq-xHNY_S0qxyMtskMHGiA_kibana_version_mismatch], reason [all shards failed]
[2018-03-19T21:09:06,034][WARN ][o.e.x.w.e.ExecutionService] [mX5UKVk] failed to execute watch [Tq-xHNY_S0qxyMtskMHGiA_elasticsearch_nodes]
[2018-03-19T21:09:06,047][WARN ][o.e.x.w.e.ExecutionService] [mX5UKVk] failed to execute watch [Tq-xHNY_S0qxyMtskMHGiA_kibana_version_mismatch]
[2018-03-19T21:09:06,052][WARN ][o.e.x.w.e.ExecutionService] [mX5UKVk] failed to execute watch [Tq-xHNY_S0qxyMtskMHGiA_xpack_license_expiration]
[2018-03-19T21:09:07,008][INFO ][o.e.c.r.a.AllocationService] [mX5UKVk] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[.kibana][0], [.monitoring-es-6-2018.02.20][0]] ...]).