So I have found out that the dashboard was actually showing an erroneous metric, and the recovery was indeed utilising 40 MB/s.
However, I have since changed this setting to 250mb and believe there is plenty of bandwidth to sustain that rate. Recovery still seems to be capped at the indices.recovery.max_bytes_per_sec default of 40mb, even though my settings are the following:
{
  "persistent": {
    "cluster": {
      "routing": {
        "allocation": {
          "node_concurrent_recoveries": "10",
          "node_initial_primaries_recoveries": "20"
        }
      }
    },
    "indices": {
      "recovery": {
        "max_bytes_per_sec": "250mb",
        "max_concurrent_file_chunks": "5"
      }
    },
    "xpack": {
      "monitoring": {
        "collection": {
          "enabled": "true"
        }
      }
    }
  },
  "transient": {}
}
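For reference, this is roughly how I applied the persistent values (a minimal sketch; the http://localhost:9200 endpoint is just an assumption for reaching a node in the cluster):

curl -X PUT 'http://localhost:9200/_cluster/settings' \
  -H 'Content-Type: application/json' \
  -d '{
    "persistent": {
      "indices.recovery.max_bytes_per_sec": "250mb",
      "indices.recovery.max_concurrent_file_chunks": "5"
    }
  }'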
It's as if the new settings have not been applied.
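To rule out another dashboard artefact, I assume the effective values (defaults included) can be read back with something like the following (same localhost:9200 assumption):

curl -s 'http://localhost:9200/_cluster/settings?include_defaults=true&flat_settings=true' \
  | grep 'indices.recovery'

This should show whether the cluster itself still reports 40mb or the new 250mb.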
I have read that compression significantly affects network usage and throughput: Elasticsearch 6.3.0 shard recovery is slow.
Furthermore, I have noticed that there was a setting called indices.recovery.compress which could also be affecting network speed: Transport.tcp.compress slowing down shard relocation
However, I cannot find this setting anymore in version 6.8. Has it been deprecated or replaced with some other parameter?
How would I go about disabling compression in Elasticsearch 6.8?
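From what I can tell, the old indices.recovery.compress option was removed in earlier releases, and inter-node traffic (recovery included) is now compressed only when transport.tcp.compress is enabled. Below is a sketch of what I would set in elasticsearch.yml on each node, assuming that is still the relevant knob in 6.8 (it is a static setting, so it would need a restart):

# elasticsearch.yml
# Explicitly disable compression of inter-node transport traffic
# (false is already the documented default).
transport.tcp.compress: false

Is that the correct replacement, or is there a recovery-specific toggle in 6.8 that I am missing?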
Thank you in advance