How to improve recovery speed?

Using 1.5.2

I have 16 shards initializing.

I'm checking recovery status through _cat/recovery, but I only see the percentage of one shard at a time actually moving up, and very slowly. It seems to me that it's recovering one shard at a time.
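For reference, this is how I'm watching it (a minimal sketch; the `v` flag adds column headers, and filtering out the "done" stage with `grep` is just one convenient way to see only active recoveries):

```shell
# Show per-shard recovery progress; drop shards whose stage is already "done"
curl -s 'localhost:9200/_cat/recovery?v' | grep -v done
```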

I have 4 nodes
Per node: 32 cores, ES_HEAP_SIZE = 30gb, and SanDisk Extreme Pro 960GB SSDs in RAID 0.

My settings are...

   "persistent": {},
   "transient": {
      "cluster": {
         "routing": {
            "allocation": {
               "cluster_concurrent_rebalance": "4",
               "node_concurrent_recoveries": "4",
               "enable": "all"
      "threadpool": {
         "bulk": {
            "size": "56",
            "queue_size": "56"
         "search": {
            "size": "100"
      "indices": {
         "store": {
            "throttle": {
               "max_bytes_per_sec": "200mb"
         "recovery": {
            "translog_size": "512kb",
            "translog_ops": "1000",
            "max_bytes_per_sec": "40mb",
            "file_chunk_size": "512kb"

Here are my HD stats:

What can I tweak?

Something like this:

curl -XPUT localhost:9200/_cluster/settings -d '{
   "persistent" : {
      "cluster.routing.allocation.node_concurrent_recoveries" : "5"
   }
}'

You can also raise the max bytes per second limit and increase the number of concurrent streams used by the recovery process, so recovery will run faster:

curl -XPUT localhost:9200/_cluster/settings -d '{
   "persistent" : {
      "indices.recovery.max_bytes_per_sec": "200mb",
      "indices.recovery.concurrent_streams": 5
   }
}'



Cool, I set up those settings. But do they take effect on the current recovery, or do the nodes need to be restarted first, so that only the next recovery uses the new settings?
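To double-check what the cluster is actually running with (settings applied through the cluster settings API are dynamic and take effect without a restart), you can read them back:

```shell
# Read back the currently applied persistent and transient cluster settings
curl -s -XGET 'localhost:9200/_cluster/settings?pretty'
```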

Right now I still only see the percentage of 1-2 shards going up, but no more...

So those settings didn't seem to make a difference; it took a whole day to recover.

1- I know that on a regular rolling restart, where we disable and re-enable cluster.routing.allocation, the shards come back almost right away, I guess because they are loading from local disk.
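The disable/re-enable step mentioned above looks like this (a sketch; in 1.x `cluster.routing.allocation.enable` accepts values such as `none` and `all`):

```shell
# Before restarting a node: stop shard allocation so its shards are not rebuilt elsewhere
curl -XPUT localhost:9200/_cluster/settings -d '{
   "transient": { "cluster.routing.allocation.enable": "none" }
}'

# After the node rejoins: re-enable allocation so the local shard copies are reused
curl -XPUT localhost:9200/_cluster/settings -d '{
   "transient": { "cluster.routing.allocation.enable": "all" }
}'
```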

2- If I randomly power off a node to simulate a "crash", recovery takes forever. I only see about 50% network utilization, the disk IOs don't seem to be utilized much, and recovery slowly limps along until it's done (try 16 hours). Yet I do know that if I grab one of the big index files and manually copy it from one node to another (i.e. take it from the ES data folder and copy it to a temp folder on another node), I can push network usage to 100%; a 5GB file takes about 20 seconds to copy.

Any other thoughts?

I have the same issue.. :frowning:
curl -s -XGET 'localhost:9200/_cat/recovery?v' shows only one shard increasing in percentage at a time, and the host has PCIe SSDs with neither a big IO load nor a big CPU load (30 cores)..