Version: 7.10.2
My cluster has 30 machines in total, with two data nodes per machine, so there are 60 data nodes altogether.
I added a data node to the cluster, and then the rebalance tasks started...
But a week has passed and rebalancing is still running, and not only for the new data node.
Could running two data nodes on one machine cause bugs in the cluster, or some other issue?
I also see this situation in the output of the GET _cat/recovery?v API:
I tried reducing cluster_concurrent_rebalance, and I set cluster.routing.rebalance.enable to none for a while, but when I re-enabled rebalancing the rebalance tasks just continued...
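(For reference, I changed those settings through the cluster settings API, roughly like this. Treat the use of "transient" as an assumption; a "persistent" block works the same way.)

# temporarily disable rebalancing
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.rebalance.enable": "none"
  }
}

# re-enable it later; null removes the override, restoring the default ("all")
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.rebalance.enable": null
  }
}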
I hope someone can help me. Thank you very much!
Please don't post pictures of text or code. They are difficult to read, impossible to search and replicate (if it's code), and some people may not even be able to see them.
[2021-09-14T01:51:00,001][INFO ][o.e.x.m.MlDailyMaintenanceService] [master-xxx.xxx.xxx.xxx-9210] triggering scheduled [ML] maintenance tasks
[2021-09-14T01:51:00,023][INFO ][o.e.x.m.a.TransportDeleteExpiredDataAction] [master-xxx.xxx.xxx.xxx-9210] Deleting expired data
[2021-09-14T01:51:00,127][INFO ][o.e.x.m.j.r.UnusedStatsRemover] [master-xxx.xxx.xxx.xxx-9210] Successfully deleted [0] unused stats documents
[2021-09-14T01:51:00,128][INFO ][o.e.x.m.a.TransportDeleteExpiredDataAction] [master-xxx.xxx.xxx.xxx-9210] Completed deletion of expired ML data
[2021-09-14T01:51:00,128][INFO ][o.e.x.m.MlDailyMaintenanceService] [master-xxx.xxx.xxx.xxx-9210] Successfully completed [ML] maintenance task: triggerDeleteExpiredDataTask
[2021-09-14T06:05:01,138][INFO ][o.e.x.i.IndexLifecycleTransition] [master-xxx.xxx.xxx.xxx-9210] moving index [metricbeat-7.10.2-2021.09.12-000040] from [{"phase":"hot","action":"rollover","name":"check-rollover-ready"}] to [{"phase":"hot","action":"rollover","name":"attempt-rollover"}] in policy [metricbeat]
[2021-09-14T06:05:01,443][INFO ][o.e.c.m.MetadataCreateIndexService] [master-xxx.xxx.xxx.xxx-9210] [metricbeat-7.10.2-2021.09.13-000041] creating index, cause [rollover_index], templates [metricbeat-7.10.2], shards [1]/[1]
[2021-09-14T06:05:03,538][INFO ][o.e.x.i.IndexLifecycleTransition] [master-xxx.xxx.xxx.xxx-9210] moving index [metricbeat-7.10.2-2021.09.13-000041] from [null] to [{"phase":"new","action":"complete","name":"complete"}] in policy [metricbeat]
[2021-09-14T06:05:03,757][INFO ][o.e.c.m.MetadataMappingService] [master-xxx.xxx.xxx.xxx-9210] [metricbeat-7.10.2-2021.09.13-000041/ph9UaWOuSv6dr1LqaLbQlA] update_mapping [_doc]
[2021-09-14T06:05:05,053][INFO ][o.e.x.i.IndexLifecycleTransition] [master-xxx.xxx.xxx.xxx-9210] moving index [metricbeat-7.10.2-2021.09.12-000040] from [{"phase":"hot","action":"rollover","name":"attempt-rollover"}] to [{"phase":"hot","action":"rollover","name":"wait-for-active-shards"}] in policy [metricbeat]
[2021-09-14T06:05:05,271][INFO ][o.e.c.m.MetadataMappingService] [master-xxx.xxx.xxx.xxx-9210] [metricbeat-7.10.2-2021.09.13-000041/ph9UaWOuSv6dr1LqaLbQlA] update_mapping [_doc]
[2021-09-14T06:05:06,831][INFO ][o.e.c.m.MetadataMappingService] [master-xxx.xxx.xxx.xxx-9210] [metricbeat-7.10.2-2021.09.13-000041/ph9UaWOuSv6dr1LqaLbQlA] update_mapping [_doc]
[2021-09-14T06:05:07,195][INFO ][o.e.x.i.IndexLifecycleTransition] [master-xxx.xxx.xxx.xxx-9210] moving index [metricbeat-7.10.2-2021.09.13-000041] from [{"phase":"new","action":"complete","name":"complete"}] to [{"phase":"hot","action":"unfollow","name":"wait-for-indexing-complete"}] in policy [metricbeat]
[2021-09-14T06:05:07,326][INFO ][o.e.x.i.IndexLifecycleTransition] [master-xxx.xxx.xxx.xxx-9210] moving index [metricbeat-7.10.2-2021.09.12-000040] from [{"phase":"hot","action":"rollover","name":"wait-for-active-shards"}] to [{"phase":"hot","action":"rollover","name":"update-rollover-lifecycle-date"}] in policy [metricbeat]
[2021-09-14T06:05:07,344][INFO ][o.e.x.i.IndexLifecycleTransition] [master-xxx.xxx.xxx.xxx-9210] moving index [metricbeat-7.10.2-2021.09.12-000040] from [{"phase":"hot","action":"rollover","name":"update-rollover-lifecycle-date"}] to [{"phase":"hot","action":"rollover","name":"set-indexing-complete"}] in policy [metricbeat]
[2021-09-14T06:05:08,339][INFO ][o.e.x.i.IndexLifecycleTransition] [master-xxx.xxx.xxx.xxx-9210] moving index [metricbeat-7.10.2-2021.09.13-000041] from [{"phase":"hot","action":"unfollow","name":"wait-for-indexing-complete"}] to [{"phase":"hot","action":"unfollow","name":"wait-for-follow-shard-tasks"}] in policy [metricbeat]
[2021-09-14T06:05:08,456][INFO ][o.e.x.i.IndexLifecycleTransition] [master-xxx.xxx.xxx.xxx-9210] moving index [metricbeat-7.10.2-2021.09.12-000040] from [{"phase":"hot","action":"rollover","name":"set-indexing-complete"}] to [{"phase":"hot","action":"complete","name":"complete"}] in policy [metricbeat]
[2021-09-14T06:09:21,953][INFO ][o.e.c.r.a.AllocationService] [master-xxx.xxx.xxx.xxx-9210] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[metricbeat-7.10.2-2021.09.13-000041][0]]]).
These are indeed all the logs, and the newly generated entries are basically similar to the following:
[2021-09-14T11:25:05,106][INFO ][o.e.x.i.IndexLifecycleTransition] [master-xxx.xxx.xxx.xxx-9210] moving index [.kibana-event-log-7.10.2-000007] from [{"phase":"hot","action":"unfollow","name":"wait-for-yellow-step"}] to [{"phase":"hot","action":"rollover","name":"check-rollover-ready"}] in policy [kibana-event-log-policy]
That message indicates that the index moved to a different state in its ILM policy, not to a different node. You can monitor shard movements using the cat recovery API. What does this show?
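For example, something along these lines lists only the recoveries that are still in flight (the column selection is just a suggestion):

GET _cat/recovery?v&active_only=true&h=index,shard,time,type,stage,source_node,target_node

Recoveries of type peer whose stage is not yet done are the ones actually moving shard copies between nodes.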
Setting cluster_concurrent_rebalance that high is almost certainly a bad idea. This setting controls how far ahead the balancer looks, but the lookahead isn't perfect, so if you set it very high (e.g. 40) it probably overshoots and then has to back-track. Set it back to 2.
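Assuming it was applied as a transient setting, resetting it would look something like this (use "persistent" instead if that is where it was set):

PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.cluster_concurrent_rebalance": 2
  }
}

Setting it to null instead of 2 also works, since 2 is the default value.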
I restarted the current master node of the cluster yesterday, and now the cluster is balanced.
It feels quite inexplicable.
Thank you very much for answering my questions and helping me.