I tried to use open-source HDFS as a snapshot repository, and when I execute the repository analysis API I get the error below:
POST /_snapshot/06031016/_analyze?blob_count=10&max_blob_size=1mb&timeout=120s&concurrency=1
{
  "error" : {
    "root_cause" : [
      {
        "type" : "repository_verification_exception",
        "reason" : "[06031016] analysis failed, you may need to manually remove [temp-analysis-9IdElIn4TY-RSNjav85cCg]"
      }
    ],
    "type" : "repository_verification_exception",
    "reason" : "[06031016] analysis failed, you may need to manually remove [temp-analysis-9IdElIn4TY-RSNjav85cCg]",
    "caused_by" : {
      "type" : "file_not_found_exception",
      "reason" : "File hdfs://hadoop1:8020/media/test/0603/temp-analysis-9IdElIn4TY-RSNjav85cCg does not exist."
    }
  },
  "status" : 500
}
The error log is as follows:
Caused by: java.io.FileNotFoundException: File hdfs://hadoop1:8020/media/test/0603/temp-analysis-pjiMKr4wRW6sxQUqUXtBfg does not exist.
at org.apache.hadoop.fs.Hdfs.listStatus(Hdfs.java:267) ~[?:?]
at org.apache.hadoop.fs.FileContext$Util$1.next(FileContext.java:1806) ~[?:?]
at org.apache.hadoop.fs.FileContext$Util$1.next(FileContext.java:1802) ~[?:?]
at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90) ~[?:?]
at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1802) ~[?:?]
at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1767) ~[?:?]
at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1726) ~[?:?]
at org.elasticsearch.repositories.hdfs.HdfsBlobContainer.lambda$listBlobsByPrefix$9(HdfsBlobContainer.java:194) ~[?:?]
at org.elasticsearch.repositories.hdfs.HdfsBlobStore.lambda$execute$2(HdfsBlobStore.java:103) ~[?:?]
at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_171]
at java.security.AccessController.doPrivileged(AccessController.java:713) ~[?:1.8.0_171]
at org.elasticsearch.repositories.hdfs.HdfsSecurityContext.doPrivilegedOrThrow(HdfsSecurityContext.java:131) ~[?:?]
at org.elasticsearch.repositories.hdfs.HdfsBlobStore.execute(HdfsBlobStore.java:101) ~[?:?]
at org.elasticsearch.repositories.hdfs.HdfsBlobContainer.listBlobsByPrefix(HdfsBlobContainer.java:194) ~[?:?]
at org.elasticsearch.repositories.hdfs.HdfsBlobContainer.listBlobs(HdfsBlobContainer.java:207) ~[?:?]
at org.elasticsearch.repositories.blobstore.testkit.RepositoryAnalyzeAction$AsyncAction.deleteContainer(RepositoryAnalyzeAction.java:591) ~[?:?]
at org.elasticsearch.repositories.blobstore.testkit.RepositoryAnalyzeAction$AsyncAction.lambda$onWorkerCompletion$2(RepositoryAnalyzeAction.java:540) ~[?:?]
at org.elasticsearch.action.ActionRunnable$2.doRun(ActionRunnable.java:62) [elasticsearch-7.13.1.jar:7.13.1]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:732) [elasticsearch-7.13.1.jar:7.13.1]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26) [elasticsearch-7.13.1.jar:7.13.1]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_171]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_171]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_171]
The Hadoop authentication mode is simple, and I am using HDFS 2.6 as the repository.
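For context, with simple authentication the HDFS repository only needs a `uri` and `path`; mine is registered roughly along these lines (the repository name, host, and path below are taken from the error messages above, so treat this as a sketch rather than my exact settings):

```json
PUT /_snapshot/06031016
{
  "type": "hdfs",
  "settings": {
    "uri": "hdfs://hadoop1:8020/",
    "path": "/media/test/0603"
  }
}
```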
Thanks! I hope you can respond. :slightly_smiling_face:
This appears to be a bug: we should be treating a missing directory as if it were empty, but it seems we don't do so for HDFS. Thanks for reporting it on GitHub too; I'm linking to it here to avoid duplicate discussion and effort:
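To illustrate the pattern involved, a fix would likely amount to catching the file-not-found error from the directory listing and returning an empty result instead of propagating it. A minimal, self-contained sketch of that idea, using `java.nio.file` on a local path as a stand-in for the HDFS `FileContext` calls (the class and method names here are hypothetical, not the actual Elasticsearch code):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Collections;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class MissingDirAsEmpty {

    // List the file names under `dir`, treating a missing directory as empty
    // rather than letting the NoSuchFileException fail the whole operation.
    static List<String> listBlobs(Path dir) throws IOException {
        try (Stream<Path> entries = Files.list(dir)) {
            return entries.map(p -> p.getFileName().toString())
                          .sorted()
                          .collect(Collectors.toList());
        } catch (NoSuchFileException e) {
            // The container was already deleted (or never created):
            // report it as empty instead of surfacing a 500 error.
            return Collections.emptyList();
        }
    }

    public static void main(String[] args) throws IOException {
        Path missing = Paths.get("temp-analysis-does-not-exist");
        System.out.println(listBlobs(missing)); // prints [] for a missing directory
    }
}
```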