At present, my ES 6.0.0 cluster health is:
{"cluster_name":"my_application","status":"yellow","timed_out":false,"number_of_nodes":3,"number_of_data_nodes":3,"active_primary_shards":2541,"active_shards":5029,"relocating_shards":0,"initializing_shards":2,"unassigned_shards":52,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":98.93763525477081}
and I am still getting the following error logs in ES 6.0.0:
[2017-12-12T13:22:23,235][ERROR][o.e.x.w.e.ExecutionService] [node-3] failed to update watch record [_8av4xzfS6iSul7eRNcDfg_kibana_version_mismatch_f8440ab2-f2c8-4231-876d-37236fbb328c-2017-12-12T18:21:53.011Z]
org.elasticsearch.ElasticsearchTimeoutException: java.util.concurrent.TimeoutException: Timeout waiting for task.
at org.elasticsearch.action.support.AdapterActionFuture.actionGet(AdapterActionFuture.java:68) ~[elasticsearch-6.0.0.jar:6.0.0]
at org.elasticsearch.xpack.watcher.history.HistoryStore.put(HistoryStore.java:100) ~[x-pack-6.0.0.jar:6.0.0]
at org.elasticsearch.xpack.watcher.execution.ExecutionService.execute(ExecutionService.java:333) ~[x-pack-6.0.0.jar:6.0.0]
at org.elasticsearch.xpack.watcher.execution.ExecutionService.lambda$executeAsync$7(ExecutionService.java:416) ~[x-pack-6.0.0.jar:6.0.0]
at org.elasticsearch.xpack.watcher.execution.ExecutionService$WatchExecutionTask.run(ExecutionService.java:568) [x-pack-6.0.0.jar:6.0.0]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) [elasticsearch-6.0.0.jar:6.0.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
Caused by: java.util.concurrent.TimeoutException: Timeout waiting for task.
at org.elasticsearch.common.util.concurrent.BaseFuture$Sync.get(BaseFuture.java:235) ~[elasticsearch-6.0.0.jar:6.0.0]
at org.elasticsearch.common.util.concurrent.BaseFuture.get(BaseFuture.java:69) ~[elasticsearch-6.0.0.jar:6.0.0]
at org.elasticsearch.action.support.AdapterActionFuture.actionGet(AdapterActionFuture.java:66) ~[elasticsearch-6.0.0.jar:6.0.0]
... 8 more
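Because the watch-history write is timing out, I also look at Watcher's execution queues to see whether executions are backing up. A rough sketch (same placeholder address; /_xpack/watcher/stats is the 6.0 X-Pack endpoint, and the response field names below are my assumption of its 6.0 shape):

import requests

ES = "http://localhost:9200"  # placeholder; adjust to your cluster address

# Ask Watcher for its currently executing and queued watches; long queues here
# would line up with the history-store timeouts above. (Field names are my
# assumption of the 6.0 response shape.)
stats = requests.get(
    ES + "/_xpack/watcher/stats",
    params={"metric": "current_watches,queued_watches"},
).json()
for node in stats.get("stats", []):
    print(node.get("node_id"),
          "current:", len(node.get("current_watches", [])),
          "queued:", len(node.get("queued_watches", [])))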
[2017-12-12T13:24:19,046][WARN ][o.e.t.TransportService ] [node_1] Received response for a request that has timed out, sent [249159ms] ago, timed out [234159ms] ago, action [cluster:monitor/nodes/stats[n]], node [{node-2}{kAHt47RSQ3Kd3ZGHjp_y5Q}{Wet4qto6SgqhP2XAwfDpaw}{192.168.1.60}{192.168.1.60:9254}{ml.max_open_jobs=10, ml.enabled=true}], id [791179]
[2017-12-12T13:24:19,046][WARN ][o.e.t.TransportService ] [node_1] Received response for a request that has timed out, sent [38941ms] ago, timed out [23941ms] ago, action [cluster:monitor/nodes/stats[n]], node [{node-2}{kAHt47RSQ3Kd3ZGHjp_y5Q}{Wet4qto6SgqhP2XAwfDpaw}{192.168.1.60}{192.168.1.60:9254}{ml.max_open_jobs=10, ml.enabled=true}], id [791619]
[2017-12-12T13:25:40,116][DEBUG][o.e.a.a.c.n.s.TransportNodesStatsAction] [node_1] failed to execute on node [kAHt47RSQ3Kd3ZGHjp_y5Q]
org.elasticsearch.transport.ReceiveTimeoutTransportException: [node-2][192.168.1.60:9254][cluster:monitor/nodes/stats[n]] request_id [791839] timed out after [15000ms]
at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:953) [elasticsearch-6.0.0.jar:6.0.0]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) [elasticsearch-6.0.0.jar:6.0.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
[2017-12-12T13:25:47,041][WARN ][o.e.t.TransportService ] [node_1] Received response for a request that has timed out, sent [442164ms] ago, timed out [427164ms] ago, action [cluster:monitor/nodes/stats[n]], node [{node-2}{kAHt47RSQ3Kd3ZGHjp_y5Q}{Wet4qto6SgqhP2XAwfDpaw}{192.168.1.60}{192.168.1.60:9254}{ml.max_open_jobs=10, ml.enabled=true}], id [790959]
[2017-12-12T13:25:47,895][WARN ][o.e.t.TransportService ] [node_1] Received response for a request that has timed out, sent [22779ms] ago, timed out [7779ms] ago, action [cluster:monitor/nodes/stats[n]], node [{node-2}{kAHt47RSQ3Kd3ZGHjp_y5Q}{Wet4qto6SgqhP2XAwfDpaw}{192.168.1.60}{192.168.1.60:9254}{ml.max_open_jobs=10, ml.enabled=true}], id [791839]
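The nodes/stats responses from node-2 arrive hundreds of seconds after they time out, which suggests node-2 itself is stalled (for example, long GC pauses). To see what that node is busy doing while the timeouts occur, I capture its hot threads. A sketch with the same placeholder address; the node name node-2 comes from the log lines above:

import requests

ES = "http://localhost:9200"  # placeholder; adjust to your cluster address

# Hot threads from the slow node; the report is plain text, not JSON.
hot = requests.get(ES + "/_nodes/node-2/hot_threads", params={"threads": 5})
print(hot.text)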