I can create an HDFS repository and run snapshot/restore successfully using Kerberos authentication. However, if I configure the namenode setting as defined in our Hadoop hdfs-site.xml file, it fails with an error.
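With HA namenodes you need to pass the HA client settings through the repository config. A minimal sketch of the kind of registration I mean (the hosts, namenode IDs, principal, and path are placeholder values to adapt to your cluster):

PUT _snapshot/my_hdfs_repository
{
  "type": "hdfs",
  "settings": {
    "uri": "hdfs://ha-hdfs/",
    "path": "/user/elasticsearch/repositories/my_hdfs_repository",
    "security.principal": "elasticsearch@REALM",
    "conf.dfs.nameservices": "ha-hdfs",
    "conf.dfs.ha.namenodes.ha-hdfs": "nn1,nn2",
    "conf.dfs.namenode.rpc-address.ha-hdfs.nn1": "namenode1.example.com:8020",
    "conf.dfs.namenode.rpc-address.ha-hdfs.nn2": "namenode2.example.com:8020",
    "conf.dfs.client.failover.proxy.provider.ha-hdfs": "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider"
  }
}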
Note that I am not using a port in the URI, just the nameservice.
Also note that the nameservice (ha-hdfs) is explicitly called out with conf.dfs.nameservices and the namenode ids are called out with conf.dfs.ha.namenodes.<nameservice>.
If you are using HA namenodes, specify port numbers only on the namenode addresses, not in the URI; only the nameservice goes in the URI.
Finally, you must also configure the client failover proxy provider (the last config entry in the sketch above); otherwise, if your active namenode fails over, the repository will be unavailable until it comes back online. That proxy provider is what implements the client-level failover logic.
Edit: This post previously had incorrect configurations. The configurations should now be correct.
I followed the suggested config and experimented with adding additional http/https addresses, but it still failed with an unknown host error. The same config with the URI replaced by the active namenode and port worked fine. It appeared to still be resolving the URI via DNS rather than through the nameservice. Did I misconfigure any parameter?
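What I registered looked roughly like this (a sketch; the namenode IDs, hosts, path, and principal are placeholders, only the nameservice and repository name are real):

PUT _snapshot/netsectest_hdfs_repository
{
  "type": "hdfs",
  "settings": {
    "uri": "hdfs://hadoop.log.labs/",
    "path": "/user/elasticsearch/snapshots",
    "security.principal": "elasticsearch@EXAMPLE.REALM",
    "conf.dfs.nameservices": "hadoop.log.labs",
    "conf.dfs.ha.namenodes.hadoop.log.labs": "nn1,nn2",
    "conf.dfs.namenode.rpc-address.hadoop.log.labs.nn1": "namenode1.example.com:8020",
    "conf.dfs.namenode.rpc-address.hadoop.log.labs.nn2": "namenode2.example.com:8020",
    "conf.dfs.client.failover.proxy.provider.hadoop.log.labs": "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider"
  }
}

The full error: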
[2018-10-26T14:30:05,130][WARN ][r.suppressed ] path: /_snapshot/netsectest_hdfs_repository, params: {repository=netsectest_hdfs_repository}
org.elasticsearch.transport.RemoteTransportException: [nets_m02][10.236.233.168:9300][cluster:admin/repository/put]
Caused by: org.elasticsearch.repositories.RepositoryException: [netsectest_hdfs_repository] cannot create blob store
at org.elasticsearch.repositories.blobstore.BlobStoreRepository.blobStore(BlobStoreRepository.java:336) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.repositories.blobstore.BlobStoreRepository.startVerification(BlobStoreRepository.java:635) ~[elasticsearch-6.4.2.jar:6.4.2]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
Caused by: org.elasticsearch.common.io.stream.NotSerializableExceptionWrapper: runtime_exception: java.lang.reflect.InvocationTargetException
at org.apache.hadoop.fs.AbstractFileSystem.newInstance(AbstractFileSystem.java:136) ~[?:?]
at org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:165) ~[?:?]
at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:250) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_181]
at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_181]
Caused by: org.elasticsearch.common.io.stream.NotSerializableExceptionWrapper: invocation_target_exception: null
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:?]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:?]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_181]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_181]
at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_181]
Caused by: java.lang.IllegalArgumentException: java.net.UnknownHostException: hadoop.log.labs
at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:418) ~[?:?]
at org.apache.hadoop.hdfs.NameNodeProxiesClient.createProxyWithClientProtocol(NameNodeProxiesClient.java:130) ~[?:?]
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:343) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_181]
at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_181]
Caused by: java.io.IOException: hadoop.log.labs
at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:418) ~[?:?]
at org.apache.hadoop.hdfs.NameNodeProxiesClient.createProxyWithClientProtocol(NameNodeProxiesClient.java:130) ~[?:?]
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:343) ~[?:?]
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:287) ~[?:?]
at org.apache.hadoop.fs.Hdfs.<init>(Hdfs.java:91) ~[?:?]
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:?]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:?]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:?]
at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_181]
at org.apache.hadoop.fs.AbstractFileSystem.newInstance(AbstractFileSystem.java:134) ~[?:?]
at org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:165) ~[?:?]
at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:250) ~[?:?]
at org.elasticsearch.repositories.hdfs.HdfsRepository.lambda$createBlobstore$0(HdfsRepository.java:130) ~[?:?]
at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_181]
at javax.security.auth.Subject.doAs(Subject.java:360) ~[?:1.8.0_181]
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1787) ~[?:?]
at org.elasticsearch.repositories.hdfs.HdfsRepository.createBlobstore(HdfsRepository.java:128) ~[?:?]
at org.elasticsearch.repositories.hdfs.HdfsRepository.lambda$createBlobStore$1(HdfsRepository.java:228) ~[?:?]
at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_181]
at org.elasticsearch.repositories.hdfs.HdfsRepository.createBlobStore(HdfsRepository.java:227) ~[?:?]
at org.elasticsearch.repositories.hdfs.HdfsRepository.createBlobStore(HdfsRepository.java:53) ~[?:?]
at org.elasticsearch.repositories.blobstore.BlobStoreRepository.blobStore(BlobStoreRepository.java:332) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.repositories.blobstore.BlobStoreRepository.startVerification(BlobStoreRepository.java:635) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.repositories.RepositoriesService.lambda$verifyRepository$2(RepositoriesService.java:218) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:624) ~[elasticsearch-6.4.2.jar:6.4.2]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_181]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_181]
at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_181]