I'm checking the status of recovery through _cat/recovery, but I only see the percentage of one shard at a time moving up, and very slowly. It seems to me that it's recovering one shard at a time.
I have 4 nodes
Per node: 32 cores, ES_HEAP_SIZE = 30gb, and SanDisk Extreme PRO 960GB SSDs in RAID 0.
You can also raise the limit on max bytes per second and increase the number of concurrent streams in the recovery process, so recovery will run faster.
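For example, these can be set dynamically through the cluster settings API. A minimal sketch, assuming ES 1.x (indices.recovery.concurrent_streams was removed in later versions); the values are illustrative, not tuned recommendations:

# Raise the recovery throttle, streams per recovery, and
# concurrent recoveries per node (transient = reset on full cluster restart)
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient": {
    "indices.recovery.max_bytes_per_sec": "200mb",
    "indices.recovery.concurrent_streams": 6,
    "cluster.routing.allocation.node_concurrent_recoveries": 4
  }
}'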
Cool, I set up those settings. But do those settings take effect on the current recovery, or do the nodes need to be restarted first so that the next recovery uses the new settings?
Right now I still only see the percentage of 1-2 shards going up, but no more...
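In case it helps: settings applied through the cluster settings API are dynamic and should not require a restart, while values that only live in elasticsearch.yml do. You can check what the cluster is actually running with:

curl -XGET 'localhost:9200/_cluster/settings?pretty'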
So those settings didn't seem to make a difference; it took a whole day to recover.
1- I know that on a regular rolling restart, where we disable and re-enable cluster.routing.allocation, the shards come back almost right away - I guess because they are loading from local disk. (See the sketch after this list.)
2- If I randomly just power off a node to simulate a "crash", this takes forever. I only see about 50% network utilisation, the IOs on the disk don't seem to be utilized much, and recovery slowly limps along until it's done (try 16 hours). Though I do know that if I grab one of the big index files and manually copy it from one node to the other, i.e. grab it from the ES data folder and just copy it to a TEMP folder on another node, I can push the network usage to 100%. A 5GB file takes about 20 seconds to copy (roughly 250 MB/s).
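For reference, here is the rolling-restart pattern from point 1 as a sketch, using the 1.x-era setting name cluster.routing.allocation.disable_allocation (later versions use cluster.routing.allocation.enable instead):

# Before stopping a node: keep the cluster from reallocating its shards
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient": { "cluster.routing.allocation.disable_allocation": true }
}'

# ... restart the node and wait for it to rejoin ...

# Re-enable allocation so the local shard copies are picked up again
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient": { "cluster.routing.allocation.disable_allocation": false }
}'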
I have the same issue.
curl -s -XGET 'localhost:9200/_cat/recovery?v' shows only one shard increasing in percentage at a time, and the host has PCIe SSDs, not a big IO load, nor a big CPU load (30 cores).
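If it helps narrow things down, _cat/recovery takes an h parameter to select columns (names below are from the 1.x _cat API and may differ in other versions):

curl -s -XGET 'localhost:9200/_cat/recovery?v&h=index,shard,stage,source_host,target_host,files_percent,bytes_percent'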