Ability to stop and start a cluster without shard movement


(Anton Alfred) #1

Continuing the discussion on why shard movement happens, even though the design intent is that there should be no movement and that local shard copies should be picked.

Original issue, which was closed until this discussion is resolved:
Ability to stop and start a cluster without shard movement

Shard movement when each node had a different number of shards

Log files from the ES master node have been attached here.

§ Output of the below commands, taken before shutting down the ES nodes, can be found in the listed text files, which are part of the zipped file.

curl -XGET 'elasticsearch1:9200/_stats?level=shards' > before-shutdown-stats-shards.txt
curl -XGET 'elasticsearch1:9200/_shard_stores?status=green,yellow,red' > before-shutdown-shard_stores-status.txt

§ Output of the below commands, taken after rebooting the ES nodes but before re-enabling allocation, can be found in the listed text files, which are also part of the zipped file.

curl -XGET 'elasticsearch1:9200/_stats?level=shards' > after-shutdown-stats-shards.txt
curl -XGET 'elasticsearch1:9200/_shard_stores?status=green,yellow,red' > after-shutdown-shard_stores-status.txt
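The "disabling allocation" and "re-enabling allocation" steps referred to throughout this test use the cluster settings API (`cluster.routing.allocation.enable`). As a sketch of the request bodies involved (the thread itself used curl; `transient` is one common choice for a restart, and setting the value to `null` resets it to the default of `"all"`):

```python
import json

# Body for PUT /_cluster/settings to stop shard allocation before shutdown.
disable_allocation = {
    "transient": {"cluster.routing.allocation.enable": "none"}
}

# Body to re-enable allocation after all nodes have rejoined:
# null resets the setting to its default ("all").
reenable_allocation = {
    "transient": {"cluster.routing.allocation.enable": None}
}

print(json.dumps(disable_allocation))
print(json.dumps(reenable_allocation))
```

With curl, each body would be sent as `curl -XPUT 'elasticsearch1:9200/_cluster/settings' -H 'Content-Type: application/json' -d '<body>'`.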

Observation:
· There was no shard movement during the allocation.

[root@elasticsearch7 elasticsearch]# curl -XGET 'elasticsearch2:9200/_cluster/health?pretty'
{
"cluster_name" : "elastic-search-cluster",
"status" : "yellow",
"timed_out" : false,
"number_of_nodes" : 7,
"number_of_data_nodes" : 7,
"active_primary_shards" : 716,
"active_shards" : 1431,
"relocating_shards" : 0,
"initializing_shards" : 1,
"unassigned_shards" : 0,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 99.93016759776536
}
· However, there were 2 shard relocations after the cluster turned “green” with 100% active shards, to balance the number of shards per node.

[root@elasticsearch7 elasticsearch]# curl -XGET 'elasticsearch2:9200/_cluster/health?pretty'
{
"cluster_name" : "elastic-search-cluster",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 7,
"number_of_data_nodes" : 7,
"active_primary_shards" : 716,
"active_shards" : 1432,
"relocating_shards" : 2,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 100.0
}
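The two health outputs above are mutually consistent: the yellow snapshot shows 1431 active shards with 1 initializing, and the green snapshot shows all 1432 active. The percentage field appears to be active / (active + initializing + unassigned) × 100, with relocating shards still counting as active — which is why the green output stays at 100% even while 2 shards relocate. A quick check of that reading:

```python
# Sanity-check of active_shards_percent_as_number from the two outputs above.
# Assumed formula: active / (active + initializing + unassigned) * 100,
# with relocating shards still counted as active.
def active_percent(active, initializing, unassigned):
    return active / (active + initializing + unassigned) * 100

yellow = active_percent(1431, 1, 0)   # first output: one shard still initializing
green = active_percent(1432, 0, 0)    # second output: fully allocated, 2 relocating

print(yellow)  # ≈ 99.9302, matching the first output
print(green)   # 100.0
```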

Shard movement when each node had an equal number of shards

  1.   We tested the ES cluster stop and start again with an equal number of shards (202) on all 7 nodes.

[root@elasticsearch5 elasticsearch]# curl 'elasticsearch3:9200/_cat/allocation?v'
shards disk.indices disk.used disk.avail disk.total disk.percent host ip node
202 57.5gb 65.1gb 82.3gb 147.5gb 44 10.0.0.26 10.0.0.26 idNEX8h
202 57gb 64.6gb 82.8gb 147.5gb 43 10.0.0.23 10.0.0.23 lURUCVH
202 57.6gb 65.2gb 82.2gb 147.5gb 44 10.0.0.27 10.0.0.27 wht1qYx
202 57.3gb 64.9gb 82.5gb 147.5gb 44 10.0.0.18 10.0.0.18 zlHb2gq
202 57.1gb 64.7gb 82.7gb 147.5gb 43 10.0.0.25 10.0.0.25 7sNEZsb
202 57.9gb 65.5gb 81.9gb 147.5gb 44 10.0.0.21 10.0.0.21 70eRJrE
202 57gb 64.6gb 82.8gb 147.5gb 43 10.0.0.17 10.0.0.17 Y0EJjkY

  2.   Below is the output after disabling allocation but before shutting down the nodes. The number of shards is not even across the 7 ES nodes, and 707 shards are unassigned.

shards disk.indices disk.used disk.avail disk.total disk.percent host ip node
152 42.4gb 64.6gb 82.8gb 147.5gb 43 10.0.0.20 10.0.0.20 Y0EJjkY
119 45.7gb 65.5gb 81.9gb 147.5gb 44 10.0.0.17 10.0.0.17 70eRJrE
90 13.2gb 64.7gb 82.7gb 147.5gb 43 10.0.0.21 10.0.0.21 7sNEZsb
139 43.7gb 64.9gb 82.5gb 147.5gb 44 10.0.0.18 10.0.0.18 zlHb2gq
0 0b 65.2gb 82.2gb 147.5gb 44 10.0.0.23 10.0.0.23 wht1qYx
55 8gb 64.7gb 82.7gb 147.5gb 43 10.0.0.22 10.0.0.22 lURUCVH
152 47.7gb 65.1gb 82.3gb 147.5gb 44 10.0.0.19 10.0.0.19 idNEX8h
707 UNASSIGNED

[root@elasticsearch7 share]# curl -XGET 'elasticsearch2:9200/_cluster/health?pretty'
{
"cluster_name" : "elastic-search-cluster",
"status" : "yellow",
"timed_out" : false,
"number_of_nodes" : 7,
"number_of_data_nodes" : 7,
"active_primary_shards" : 707,
"active_shards" : 707,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 707,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 50.0
}
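The `_cat/allocation` table and the health output above are internally consistent: the per-node shard counts sum to the 707 active shards, and adding the 707 unassigned replicas gives the cluster total of 1414, i.e. 50% active. A quick cross-check:

```python
# Cross-check of the _cat/allocation output above: per-node "shards" counts
# should sum to the cluster health's active_shards, and adding the
# unassigned count gives the cluster total.
per_node = [152, 119, 90, 139, 0, 55, 152]   # "shards" column, one entry per node
unassigned = 707

active = sum(per_node)
total = active + unassigned

print(active)                # 707, matching active_shards
print(active / total * 100)  # 50.0, matching active_shards_percent_as_number
```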

  3.   Below is the output after restarting the nodes and re-enabling allocation. We can see shard relocation happening to balance the number of shards across the cluster nodes.

Even though the number of shards was initially even across all 7 nodes, shard relocation happened after the cluster turned “green”, which is not in our control.

[root@elasticsearch7 share]# curl -XGET 'elasticsearch2:9200/_cluster/health?pretty'
{
"cluster_name" : "elastic-search-cluster",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 7,
"number_of_data_nodes" : 7,
"active_primary_shards" : 707,
"active_shards" : 1414,
"relocating_shards" : 2,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 100.0
}
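The situation described above can be read directly off the health response: recovery is complete (no initializing or unassigned shards) yet `relocating_shards` is non-zero, meaning the balancer, not recovery, is moving shards. A small helper illustrating that reading, assuming the JSON shape shown above (if the goal is to prevent this post-recovery movement, the `cluster.routing.rebalance.enable` cluster setting is the relevant knob):

```python
import json

def is_rebalancing(health):
    """True when recovery is complete but the balancer is still moving shards."""
    recovered = (health["initializing_shards"] == 0
                 and health["unassigned_shards"] == 0)
    return recovered and health["relocating_shards"] > 0

# The relevant fields from the green output above.
green = json.loads("""
{ "status": "green", "initializing_shards": 0,
  "unassigned_shards": 0, "relocating_shards": 2 }
""")
print(is_rebalancing(green))  # True: movement seen after the cluster turned green
```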

There seems to be some confusion among developers on GitHub and here. I am not able to upload a zip or tar file, so I have uploaded it to the original issue on GitHub.


Ability to stop and start a cluster without shard movement part2
(David Turner) #2

The logs you uploaded do not contain the output you claim, and the logs seem to be corrupted:

$ ls -al es-logs.zip
-rw-r--r--@ 1 davidturner  staff  2466051  4 Jun 09:12 es-logs.zip
$ shasum es-logs.zip
72524762a3462b95226efb8ea9d9e302c5295db4  es-logs.zip
$ unzip es-logs.zip
Archive:  es-logs.zip
   creating: es-logs/
  inflating: es-logs/before-sutdown-stats-shards.txt
   creating: __MACOSX/
   creating: __MACOSX/es-logs/
  inflating: __MACOSX/es-logs/._before-sutdown-stats-shards.txt
  inflating: es-logs/elastic-search-cluster.log
 extracting: es-logs/elastic-search-cluster_deprecation.log
  inflating: __MACOSX/es-logs/._elastic-search-cluster_deprecation.log
  inflating: __MACOSX/._es-logs
$ find . -type f
./__MACOSX/._es-logs
./__MACOSX/es-logs/._before-sutdown-stats-shards.txt
./__MACOSX/es-logs/._elastic-search-cluster_deprecation.log
./es-logs.zip
./es-logs/before-sutdown-stats-shards.txt
./es-logs/elastic-search-cluster.log
./es-logs/elastic-search-cluster_deprecation.log
$ xxd -s 21989248 es-logs/elastic-search-cluster.log | head -n20
014f8780: 616c 616e 6365 6453 6861 7264 7341 6c6c  alancedShardsAll
014f8790: 6f63 6174 6f72 5d20 5b37 3065 524a 7245  ocator] [70eRJrE
014f87a0: 5d20 4173 7369 676e 6564 2073 6861 7264  ] Assigned shard
014f87b0: 205b 5b74 6573 6162 696e 6172 7973 7472   [[tesabinarystr
014f87c0: 6561 6d6f 626a 6563 7469 6e64 6578 5f76  eamobjectindex_v
014f87d0: 335d 5b32 5d2c 206e 6f64 655b 3773 4e45  3][2], node[7sNE
014f87e0: 5a73 624e 5355 2d43 787a 475f 3634 6e31  ZsbNSU-CxzG_64n1
014f87f0: 4351 5d2c 205b 505d 2c20 735b 5354 4152  CQ], [P], s[STAR
014f8800: 0000 0000 0000 0000 0000 0000 0000 0000  ................
014f8810: 0000 0000 0000 0000 0000 0000 0000 0000  ................
014f8820: 0000 0000 0000 0000 0000 0000 0000 0000  ................
014f8830: 0000 0000 0000 0000 0000 0000 0000 0000  ................
014f8840: 0000 0000 0000 0000 0000 0000 0000 0000  ................
014f8850: 0000 0000 0000 0000 0000 0000 0000 0000  ................
014f8860: 0000 0000 0000 0000 0000 0000 0000 0000  ................
014f8870: 0000 0000 0000 0000 0000 0000 0000 0000  ................
014f8880: 0000 0000 0000 0000 0000 0000 0000 0000  ................
014f8890: 0000 0000 0000 0000 0000 0000 0000 0000  ................
014f88a0: 0000 0000 0000 0000 0000 0000 0000 0000  ................
014f88b0: 0000 0000 0000 0000 0000 0000 0000 0000  ................

I've downloaded this file twice to check it didn't break during download and had the same results both times. Please could you try zipping and uploading this information again?
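The corruption visible in the `xxd` dump above is a long run of NUL bytes in the middle of the extracted log. One way to scan an extracted file for that kind of padding — a sketch using an in-memory zip with hypothetical file names, mimicking the structure of the uploaded archive:

```python
import io
import zipfile

# Build a tiny in-memory zip whose log entry ends in a NUL run, then scan
# for it -- the same kind of check as the xxd inspection above.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("es-logs/cluster.log",
               b"Assigned shard [P], s[STAR" + b"\x00" * 64)

with zipfile.ZipFile(buf) as z:
    data = z.read("es-logs/cluster.log")

nul_run = b"\x00" * 16  # a 16-byte NUL run never occurs in a healthy text log
print(nul_run in data)  # True: the file is truncated/padded with NULs
```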


(David Turner) #3

Please could you explain in more detail what happened in between these two outputs of GET _cat/allocation?v? It sounds from your text that all you did was disable allocation, but I think more occurred here.


(system) #4

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.