Elasticsearch Snapshot using HDFS

Hey team,

I was trying to take a snapshot of our ES indices and store it in HDFS. When I supply the HDFS HA configuration, the repository fails to initialize (error below), whereas if I give the active NameNode hostname directly, it works and I was able to create the repository. Can someone check and tell me if anything is wrong with the configuration?

{
  "settings.conf.dfs.client.failover.proxy.provider.test": "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider",
  "settings.conf.dfs.ha.automatic-failover.enabled.test": true,
  "settings.conf.dfs.ha.namenodes.test": "nn1,nn2",
  "settings.conf.dfs.namenode.rpc-address.test.nn1": "server1:8020",
  "settings.conf.dfs.namenode.rpc-address.test.nn2": "server2:8020",
  "settings.conf.dfs.nameservices": "test",
  "settings.conf.fs.hdfs.impl": "org.apache.hadoop.hdfs.DistributedFileSystem",
  "type": "hdfs"
}
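
For reference, this is roughly the shape of the registration request I am attempting, following the repository-hdfs plugin's conf.* passthrough pattern for HA. The repository name, uri, and path values here are placeholders, with the uri pointed at the nameservice rather than a single NameNode:

PUT _snapshot/test_rep
{
  "type": "hdfs",
  "settings": {
    "uri": "hdfs://test/",
    "path": "/user/elasticsearch/repositories/test_rep",
    "conf.dfs.nameservices": "test",
    "conf.dfs.ha.namenodes.test": "nn1,nn2",
    "conf.dfs.namenode.rpc-address.test.nn1": "server1:8020",
    "conf.dfs.namenode.rpc-address.test.nn2": "server2:8020",
    "conf.dfs.client.failover.proxy.provider.test": "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider"
  }
}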

ERROR:

org.elasticsearch.repositories.RepositoryVerificationException: [test_rep] path is not accessible on master node
Caused by: org.elasticsearch.repositories.RepositoryException: [test_rep] cannot create blob store
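
For comparison, pointing the uri straight at the active NameNode registers fine. A sketch of that working variant (the path is again a placeholder):

PUT _snapshot/test_rep
{
  "type": "hdfs",
  "settings": {
    "uri": "hdfs://server1:8020/",
    "path": "/user/elasticsearch/repositories/test_rep"
  }
}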