Problem when upgrading from ES-5.4.0 to ES-6.0.0

Hi all,
here are the steps I followed to upgrade from ES-5.4.0 to ES-6.0.0.
Step-1:
I disabled shard allocation, stopped indexing, and performed a synced flush. I then extracted the ES-6.0.0 tar file and copied the data and config directories from ES-5.4.0 into ES-6.0.0. We have X-Pack installed in our ES-5.4.0 cluster, so I also installed X-Pack in ES-6.0.0.
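For reference, the allocation-disable and synced-flush parts of Step-1 were done with the standard cluster APIs, roughly like this (a sketch; `localhost:9200` and the `elastic` user are placeholders for our setup):

```shell
# Disable shard allocation so shards are not rebalanced while nodes restart
curl -u elastic -X PUT "localhost:9200/_cluster/settings" \
  -H 'Content-Type: application/json' -d'
{
  "transient": {
    "cluster.routing.allocation.enable": "none"
  }
}'

# After stopping indexing, perform a synced flush
curl -u elastic -X POST "localhost:9200/_flush/synced"
```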

Step-2:
I changed the default X-Pack users' passwords in ES-6.0.0 with:
./x-pack/setup-passwords interactive

Step-3:
I started all the nodes in the cluster. The cluster formed, but the replica shards are in an unassigned state and are being assigned very slowly: ES-6.0.0 took 18 hours to assign 1000 of the 2500 shards.
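One thing I am double-checking on my side: if allocation is still set to "none" from Step-1, replicas will not be assigned until it is re-enabled. Something like the following (again with placeholder host/credentials) re-enables allocation and asks the cluster why a shard is unassigned:

```shell
# Re-enable shard allocation (it was set to "none" before the upgrade)
curl -u elastic -X PUT "localhost:9200/_cluster/settings" \
  -H 'Content-Type: application/json' -d'
{
  "transient": {
    "cluster.routing.allocation.enable": "all"
  }
}'

# Ask the cluster to explain the first unassigned shard it finds
curl -u elastic -X GET "localhost:9200/_cluster/allocation/explain?pretty"
```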

and I am getting the following logs in ES-6.0.0:

[2017-12-12T08:22:49,084][WARN ][o.e.t.TransportService   ] [node_1] Received response for a request that has timed out, sent [75977ms] ago, timed out [60976ms] ago, action [cluster:monitor/nodes/stats[n]], node [{node-2}{kAHt47RSQ3Kd3ZGHjp_y5Q}{Wet4qto6SgqhP2XAwfDpaw}{192.168.1.60}{192.168.1.60:9254}{ml.max_open_jobs=10, ml.enabled=true}], id [681004]
[2017-12-12T08:23:39,058][WARN ][o.e.t.TransportService   ] [node_1] Received response for a request that has timed out, sent [70426ms] ago, timed out [55425ms] ago, action [cluster:monitor/nodes/stats[n]], node [{node-2}{kAHt47RSQ3Kd3ZGHjp_y5Q}{Wet4qto6SgqhP2XAwfDpaw}{192.168.1.60}{192.168.1.60:9254}{ml.max_open_jobs=10, ml.enabled=true}], id [681437]
[2017-12-12T08:26:16,045][ERROR][o.e.x.m.c.c.ClusterStatsCollector] [node_1] collector [cluster_stats] timed out when collecting data
[2017-12-12T08:26:26,063][ERROR][o.e.x.m.c.i.IndexStatsCollector] [node_1] collector [index-stats] timed out when collecting data
[2017-12-12T08:26:36,165][ERROR][o.e.x.m.c.i.IndexRecoveryCollector] [node_1] collector [index_recovery] timed out when collecting data
[2017-12-12T08:27:04,222][WARN ][o.e.x.w.e.ExecutionService] [node-2] failed to execute watch [_8av4xzfS6iSul7eRNcDfg_elasticsearch_cluster_status]

I am not sure what is causing this issue.
Please advise.

Thank You.

At present my ES-6.0.0 cluster health is:

{
  "cluster_name": "my_application",
  "status": "yellow",
  "timed_out": false,
  "number_of_nodes": 3,
  "number_of_data_nodes": 3,
  "active_primary_shards": 2541,
  "active_shards": 5029,
  "relocating_shards": 0,
  "initializing_shards": 2,
  "unassigned_shards": 52,
  "delayed_unassigned_shards": 0,
  "number_of_pending_tasks": 0,
  "number_of_in_flight_fetch": 0,
  "task_max_waiting_in_queue_millis": 0,
  "active_shards_percent_as_number": 98.93763525477081
}
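To see which of the 52 shards are still unassigned and why, I am listing them with the cat shards API, roughly like this (placeholder host/credentials as above):

```shell
# List unassigned shards with their reason codes (e.g. NODE_LEFT, CLUSTER_RECOVERED)
curl -u elastic -s "localhost:9200/_cat/shards?v&h=index,shard,prirep,state,unassigned.reason" \
  | grep UNASSIGNED
```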

and I am still getting the following error logs in ES-6.0.0:

[2017-12-12T13:22:23,235][ERROR][o.e.x.w.e.ExecutionService] [node-3] failed to update watch record [_8av4xzfS6iSul7eRNcDfg_kibana_version_mismatch_f8440ab2-f2c8-4231-876d-37236fbb328c-2017-12-12T18:21:53.011Z]
org.elasticsearch.ElasticsearchTimeoutException: java.util.concurrent.TimeoutException: Timeout waiting for task.
	at org.elasticsearch.action.support.AdapterActionFuture.actionGet(AdapterActionFuture.java:68) ~[elasticsearch-6.0.0.jar:6.0.0]
	at org.elasticsearch.xpack.watcher.history.HistoryStore.put(HistoryStore.java:100) ~[x-pack-6.0.0.jar:6.0.0]
	at org.elasticsearch.xpack.watcher.execution.ExecutionService.execute(ExecutionService.java:333) ~[x-pack-6.0.0.jar:6.0.0]
	at org.elasticsearch.xpack.watcher.execution.ExecutionService.lambda$executeAsync$7(ExecutionService.java:416) ~[x-pack-6.0.0.jar:6.0.0]
	at org.elasticsearch.xpack.watcher.execution.ExecutionService$WatchExecutionTask.run(ExecutionService.java:568) [x-pack-6.0.0.jar:6.0.0]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) [elasticsearch-6.0.0.jar:6.0.0]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
Caused by: java.util.concurrent.TimeoutException: Timeout waiting for task.
	at org.elasticsearch.common.util.concurrent.BaseFuture$Sync.get(BaseFuture.java:235) ~[elasticsearch-6.0.0.jar:6.0.0]
	at org.elasticsearch.common.util.concurrent.BaseFuture.get(BaseFuture.java:69) ~[elasticsearch-6.0.0.jar:6.0.0]
	at org.elasticsearch.action.support.AdapterActionFuture.actionGet(AdapterActionFuture.java:66) ~[elasticsearch-6.0.0.jar:6.0.0]
	... 8 more
[2017-12-12T13:24:19,046][WARN ][o.e.t.TransportService   ] [node_1] Received response for a request that has timed out, sent [249159ms] ago, timed out [234159ms] ago, action [cluster:monitor/nodes/stats[n]], node [{node-2}{kAHt47RSQ3Kd3ZGHjp_y5Q}{Wet4qto6SgqhP2XAwfDpaw}{192.168.1.60}{192.168.1.60:9254}{ml.max_open_jobs=10, ml.enabled=true}], id [791179]
[2017-12-12T13:24:19,046][WARN ][o.e.t.TransportService   ] [node_1] Received response for a request that has timed out, sent [38941ms] ago, timed out [23941ms] ago, action [cluster:monitor/nodes/stats[n]], node [{node-2}{kAHt47RSQ3Kd3ZGHjp_y5Q}{Wet4qto6SgqhP2XAwfDpaw}{192.168.1.60}{192.168.1.60:9254}{ml.max_open_jobs=10, ml.enabled=true}], id [791619]
[2017-12-12T13:25:40,116][DEBUG][o.e.a.a.c.n.s.TransportNodesStatsAction] [node_1] failed to execute on node [kAHt47RSQ3Kd3ZGHjp_y5Q]
org.elasticsearch.transport.ReceiveTimeoutTransportException: [node-2][192.168.1.60:9254][cluster:monitor/nodes/stats[n]] request_id [791839] timed out after [15000ms]
	at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:953) [elasticsearch-6.0.0.jar:6.0.0]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) [elasticsearch-6.0.0.jar:6.0.0]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
[2017-12-12T13:25:47,041][WARN ][o.e.t.TransportService   ] [node_1] Received response for a request that has timed out, sent [442164ms] ago, timed out [427164ms] ago, action [cluster:monitor/nodes/stats[n]], node [{node-2}{kAHt47RSQ3Kd3ZGHjp_y5Q}{Wet4qto6SgqhP2XAwfDpaw}{192.168.1.60}{192.168.1.60:9254}{ml.max_open_jobs=10, ml.enabled=true}], id [790959]
[2017-12-12T13:25:47,895][WARN ][o.e.t.TransportService   ] [node_1] Received response for a request that has timed out, sent [22779ms] ago, timed out [7779ms] ago, action [cluster:monitor/nodes/stats[n]], node [{node-2}{kAHt47RSQ3Kd3ZGHjp_y5Q}{Wet4qto6SgqhP2XAwfDpaw}{192.168.1.60}{192.168.1.60:9254}{ml.max_open_jobs=10, ml.enabled=true}], id [791839]
