I'm currently migrating from a 1.7.x cluster to 6.3.2 and ran into this exact same problem. No matter what I set `indices.recovery.max_bytes_per_sec` to, the max throughput was about 11 MB/s. This is a large cluster with 10Gb Ethernet, flash storage, and plenty of RAM and CPU, so I was equally stumped.

As it happens, I noticed that our previous elasticsearch.yml had `transport.tcp.compress` set to `true`. I hadn't paid much attention to it, but I should have: changing it to `false` (disabling compression) immediately brought bandwidth and recovery performance in line with the `indices.recovery.max_bytes_per_sec` setting. Is it possible your nodes currently have this set?
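For reference, here's a minimal sketch of the relevant elasticsearch.yml lines (the recovery limit value is just an example, not a recommendation; tune it to your hardware):

```yaml
# elasticsearch.yml (per node)
transport.tcp.compress: false               # disable transport-layer compression
indices.recovery.max_bytes_per_sec: 500mb   # example recovery throttle; adjust for your cluster
```

You can check what each node actually picked up with `GET _nodes/settings`. Note that `indices.recovery.max_bytes_per_sec` can also be changed at runtime through the cluster settings API, but I believe `transport.tcp.compress` is a static setting, so each node needs a restart for the change to take effect.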