Why "conf.dfs.replication" : "1" is not SUCCESS?

I set up an HDFS snapshot repository with conf.dfs.replication = 1, but the data is still written with replication factor 3 (the hdfs dfs -du output below shows 3.9 G consumed for 1.3 G of data), so I have to run hdfs dfs -setrep -R -w 1 /es_snapshots manually.

GET _snapshot/es_snapshots
{
  "es_snapshots" : {
    "type" : "hdfs",
    "uuid" : "ixsTxZ2EScWQYVprkzdCyQ",
    "settings" : {
      "path" : "/es_snapshots",
      "conf" : {
        "dfs" : {
          "replication" : "1"
        }
      },
      "compress" : "true",
      "uri" : "hdfs://10.0.100.7:8020"
    }
  }
}
hdfs dfs -du -s -h /es_snapshots/indices/*
1.3 G  3.9 G  /es_snapshots/indices/IlkxzKdZQRKIpSNHVkIeFA

Can anyone help me?

I don't know, but maybe you need to use a number instead of "1"?
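
Something like this, perhaps (just a guess, untested):

PUT _snapshot/es_snapshots
{
  "type" : "hdfs",
  "settings" : {
    "path" : "/es_snapshots",
    "conf" : {
      "dfs" : {
        "replication" : 1
      }
    },
    "compress" : "true",
    "uri" : "hdfs://10.0.100.7:8020"
  }
}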

Thank you, but the quotes are added automatically by Elasticsearch.

I have no idea.

Maybe look at the logs and turn on the debug level so we can see what is happening when registering the repository?
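
For example (the logger names here are my guess, based on the package names that show up in your logs):

PUT _cluster/settings
{
  "persistent" : {
    "logger.org.elasticsearch.repositories" : "DEBUG",
    "logger.org.elasticsearch.repositories.hdfs" : "DEBUG"
  }
}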

Does the repository path in HDFS exist when you create the repository in Elasticsearch?
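
A quick way to check, assuming your repository path:

hdfs dfs -test -d /es_snapshots && echo exists || echo missing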

[2022-10-17T15:26:29,496][DEBUG][o.e.r.h.HdfsRepository   ] [bs-dp-aidebugger-dev-001] Adding configuration to HDFS Client Configuration : dfs.replication = 1
[2022-10-17T15:26:29,557][DEBUG][o.a.h.s.Groups           ] [bs-dp-aidebugger-dev-001]  Creating new Groups object
[2022-10-17T15:26:29,560][DEBUG][o.a.h.u.NativeCodeLoader ] [bs-dp-aidebugger-dev-001] Trying to load the custom-built native-hadoop library...
[2022-10-17T15:26:29,560][DEBUG][o.a.h.u.NativeCodeLoader ] [bs-dp-aidebugger-dev-001] Failed to load native-hadoop with error: java.security.AccessControlException: access denied ("java.lang.RuntimePermission" "loadLibrary.hadoop")
[2022-10-17T15:26:29,560][DEBUG][o.a.h.u.NativeCodeLoader ] [bs-dp-aidebugger-dev-001] java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib
[2022-10-17T15:26:29,561][WARN ][o.a.h.u.NativeCodeLoader ] [bs-dp-aidebugger-dev-001] Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[2022-10-17T15:26:29,562][DEBUG][o.a.h.u.PerformanceAdvisory] [bs-dp-aidebugger-dev-001] Falling back to shell based
[2022-10-17T15:26:29,564][DEBUG][o.a.h.s.JniBasedUnixGroupsMappingWithFallback] [bs-dp-aidebugger-dev-001] Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping
[2022-10-17T15:26:29,672][DEBUG][o.a.h.s.Groups           ] [bs-dp-aidebugger-dev-001] Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000; warningDeltaMs=5000
[2022-10-17T15:26:29,672][DEBUG][o.e.r.h.HdfsRepository   ] [bs-dp-aidebugger-dev-001] Hadoop security enabled: [false]
[2022-10-17T15:26:29,672][DEBUG][o.e.r.h.HdfsRepository   ] [bs-dp-aidebugger-dev-001] Using Hadoop authentication method: [SIMPLE]
[2022-10-17T15:26:29,724][DEBUG][o.a.h.s.UserGroupInformation] [bs-dp-aidebugger-dev-001] Hadoop login
[2022-10-17T15:26:29,725][DEBUG][o.a.h.s.UserGroupInformation] [bs-dp-aidebugger-dev-001] hadoop login commit
[2022-10-17T15:26:29,727][DEBUG][o.a.h.s.UserGroupInformation] [bs-dp-aidebugger-dev-001] Using local user: UnixPrincipal: elasticsearch
[2022-10-17T15:26:29,727][DEBUG][o.a.h.s.UserGroupInformation] [bs-dp-aidebugger-dev-001] Using user: "UnixPrincipal: elasticsearch" with name: elasticsearch
[2022-10-17T15:26:29,727][DEBUG][o.a.h.s.UserGroupInformation] [bs-dp-aidebugger-dev-001] User entry: "elasticsearch"
[2022-10-17T15:26:29,728][DEBUG][o.a.h.s.UserGroupInformation] [bs-dp-aidebugger-dev-001] UGI loginUser: elasticsearch (auth:SIMPLE)
[2022-10-17T15:26:29,742][DEBUG][o.a.h.s.UserGroupInformation] [bs-dp-aidebugger-dev-001] PrivilegedAction [as: elasticsearch (auth:SIMPLE)][action: org.elasticsearch.repositories.hdfs.HdfsRepository$$Lambda$7500/0x0000000801d45150@40ba14a5]
java.lang.Exception: null
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1852) [hadoop-client-api-7.17.5.jar:7.17.5]
        at org.elasticsearch.repositories.hdfs.HdfsRepository.createBlobstore(HdfsRepository.java:136) [repository-hdfs-7.17.5.jar:7.17.5]
        at org.elasticsearch.repositories.hdfs.HdfsRepository.lambda$createBlobStore$1(HdfsRepository.java:247) [repository-hdfs-7.17.5.jar:7.17.5]
        at java.security.AccessController.doPrivileged(AccessController.java:318) [?:?]
        at org.elasticsearch.repositories.hdfs.HdfsRepository.createBlobStore(HdfsRepository.java:246) [repository-hdfs-7.17.5.jar:7.17.5]
        at org.elasticsearch.repositories.hdfs.HdfsRepository.createBlobStore(HdfsRepository.java:44) [repository-hdfs-7.17.5.jar:7.17.5]
        at org.elasticsearch.repositories.blobstore.BlobStoreRepository.blobStore(BlobStoreRepository.java:746) [elasticsearch-7.17.5.jar:7.17.5]
        at org.elasticsearch.repositories.blobstore.BlobStoreRepository.verify(BlobStoreRepository.java:3217) [elasticsearch-7.17.5.jar:7.17.5]
        at org.elasticsearch.repositories.VerifyNodeRepositoryAction.doVerify(VerifyNodeRepositoryAction.java:130) [elasticsearch-7.17.5.jar:7.17.5]
        at org.elasticsearch.repositories.VerifyNodeRepositoryAction.access$400(VerifyNodeRepositoryAction.java:37) [elasticsearch-7.17.5.jar:7.17.5]
        at org.elasticsearch.repositories.VerifyNodeRepositoryAction$VerifyNodeRepositoryRequestHandler.messageReceived(VerifyNodeRepositoryAction.java:162) [elasticsearch-7.17.5.jar:7.17.5]
        at org.elasticsearch.repositories.VerifyNodeRepositoryAction$VerifyNodeRepositoryRequestHandler.messageReceived(VerifyNodeRepositoryAction.java:157) [elasticsearch-7.17.5.jar:7.17.5]
        at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$1.doRun(SecurityServerTransportInterceptor.java:341) [x-pack-security-7.17.5.jar:7.17.5]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26) [elasticsearch-7.17.5.jar:7.17.5]
        at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$3.onResponse(SecurityServerTransportInterceptor.java:404) [x-pack-security-7.17.5.jar:7.17.5]
        at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$3.onResponse(SecurityServerTransportInterceptor.java:394) [x-pack-security-7.17.5.jar:7.17.5]
        at org.elasticsearch.xpack.security.authz.AuthorizationService.authorizeSystemUser(AuthorizationService.java:620) [x-pack-security-7.17.5.jar:7.17.5]
        at org.elasticsearch.xpack.security.authz.AuthorizationService.authorize(AuthorizationService.java:250) [x-pack-security-7.17.5.jar:7.17.5]
        at org.elasticsearch.xpack.security.transport.ServerTransportFilter$NodeProfile.lambda$inbound$1(ServerTransportFilter.java:136) [x-pack-security-7.17.5.jar:7.17.5]
        at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:136) [elasticsearch-7.17.5.jar:7.17.5]
        at org.elasticsearch.action.ActionListener$MappedActionListener.onResponse(ActionListener.java:101) [elasticsearch-7.17.5.jar:7.17.5]
        at org.elasticsearch.xpack.security.authc.AuthenticatorChain.authenticateAsync(AuthenticatorChain.java:102) [x-pack-security-7.17.5.jar:7.17.5]
        at org.elasticsearch.xpack.security.authc.AuthenticationService.authenticate(AuthenticationService.java:199) [x-pack-security-7.17.5.jar:7.17.5]
        at org.elasticsearch.xpack.security.transport.ServerTransportFilter$NodeProfile.inbound(ServerTransportFilter.java:128) [x-pack-security-7.17.5.jar:7.17.5]
        at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler.messageReceived(SecurityServerTransportInterceptor.java:415) [x-pack-security-7.17.5.jar:7.17.5]
        at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:67) [elasticsearch-7.17.5.jar:7.17.5]
        at org.elasticsearch.transport.InboundHandler$1.doRun(InboundHandler.java:260) [elasticsearch-7.17.5.jar:7.17.5]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:777) [elasticsearch-7.17.5.jar:7.17.5]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26) [elasticsearch-7.17.5.jar:7.17.5]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]
        at java.lang.Thread.run(Thread.java:833) [?:?]
[2022-10-17T15:26:29,766][DEBUG][o.e.x.s.a.e.ReservedRealm] [bs-dp-aidebugger-dev-001] realm [reserved] authenticated user [elastic], with roles [[superuser]] (cached)
[2022-10-17T15:26:29,766][DEBUG][o.e.x.s.a.RealmsAuthenticator] [bs-dp-aidebugger-dev-001] Authentication of [elastic] using realm [reserved/reserved] with token [UsernamePasswordToken] was [AuthenticationResult{status=SUCCESS, user=User[username=elastic,roles=[superuser],fullName=null,email=null,metadata={_reserved=true}], message=null, exception=null}]
[2022-10-17T15:26:29,924][DEBUG][o.a.h.c.Tracer           ] [bs-dp-aidebugger-dev-001] sampler.classes = ; loaded no samplers
[2022-10-17T15:26:29,931][DEBUG][o.a.h.c.Tracer           ] [bs-dp-aidebugger-dev-001] span.receiver.classes = ; loaded no span receivers
[2022-10-17T15:26:29,959][DEBUG][o.a.h.h.c.i.DfsClientConf] [bs-dp-aidebugger-dev-001] dfs.client.use.legacy.blockreader.local = false
[2022-10-17T15:26:29,960][DEBUG][o.a.h.h.c.i.DfsClientConf] [bs-dp-aidebugger-dev-001] dfs.client.read.shortcircuit = false
[2022-10-17T15:26:29,960][DEBUG][o.a.h.h.c.i.DfsClientConf] [bs-dp-aidebugger-dev-001] dfs.client.domain.socket.data.traffic = false
[2022-10-17T15:26:29,961][DEBUG][o.a.h.h.c.i.DfsClientConf] [bs-dp-aidebugger-dev-001] dfs.domain.socket.path = 
[2022-10-17T15:26:29,981][DEBUG][o.a.h.h.DFSClient        ] [bs-dp-aidebugger-dev-001] Sets dfs.client.block.write.replace-datanode-on-failure.min-replication to 0
[2022-10-17T15:26:30,008][DEBUG][o.a.h.i.r.RetryUtils     ] [bs-dp-aidebugger-dev-001] multipleLinearRandomRetry = null
[2022-10-17T15:26:30,034][DEBUG][o.a.h.i.Server           ] [bs-dp-aidebugger-dev-001] rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apache.hadoop.ipc.ProtobufRpcEngine2$RpcProtobufRequest, rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker@3126f543
[2022-10-17T15:26:30,042][DEBUG][o.a.h.i.Client           ] [bs-dp-aidebugger-dev-001] getting client out of cache: Client-83f5c34ed6f247bab5bfa3dfcba5ccb0
[2022-10-17T15:26:30,252][DEBUG][o.e.x.s.a.e.ReservedRealm] [bs-dp-aidebugger-dev-001] realm [reserved] authenticated user [elastic], with roles [[superuser]] (cached)
[2022-10-17T15:26:30,253][DEBUG][o.e.x.s.a.RealmsAuthenticator] [bs-dp-aidebugger-dev-001] Authentication of [elastic] using realm [reserved/reserved] with token [UsernamePasswordToken] was [AuthenticationResult{status=SUCCESS, user=User[username=elastic,roles=[superuser],fullName=null,email=null,metadata={_reserved=true}], message=null, exception=null}]
[2022-10-17T15:26:30,254][DEBUG][o.e.r.TransportUpdateByQueryAction] [bs-dp-aidebugger-dev-001] [11058]: starting
[2022-10-17T15:26:30,254][DEBUG][o.e.r.TransportUpdateByQueryAction] [bs-dp-aidebugger-dev-001] executing initial scroll against [.kibana_task_manager]
[2022-10-17T15:26:30,282][DEBUG][o.e.r.TransportUpdateByQueryAction] [bs-dp-aidebugger-dev-001] [11058]: got scroll response with [0] hits
[2022-10-17T15:26:30,283][DEBUG][o.e.i.r.WorkerBulkByScrollTaskState] [bs-dp-aidebugger-dev-001] [11058]: preparing bulk request for [0s]
[2022-10-17T15:26:30,283][DEBUG][o.e.r.TransportUpdateByQueryAction] [bs-dp-aidebugger-dev-001] [11058]: preparing bulk request
[2022-10-17T15:26:30,283][DEBUG][o.e.r.TransportUpdateByQueryAction] [bs-dp-aidebugger-dev-001] [11058]: finishing without any catastrophic failures
[2022-10-17T15:26:30,284][DEBUG][o.e.r.TransportUpdateByQueryAction] [bs-dp-aidebugger-dev-001] Freed [1] contexts
[2022-10-17T15:26:30,294][DEBUG][o.e.x.s.a.e.ReservedRealm] [bs-dp-aidebugger-dev-001] realm [reserved] authenticated user [elastic], with roles [[superuser]] (cached)
[2022-10-17T15:26:30,295][DEBUG][o.e.x.s.a.RealmsAuthenticator] [bs-dp-aidebugger-dev-001] Authentication of [elastic] using realm [reserved/reserved] with token [UsernamePasswordToken] was [AuthenticationResult{status=SUCCESS, user=User[username=elastic,roles=[superuser],fullName=null,email=null,metadata={_reserved=true}], message=null, exception=null}]
[2022-10-17T15:26:30,296][DEBUG][o.e.r.TransportUpdateByQueryAction] [bs-dp-aidebugger-dev-001] [11062]: starting
[2022-10-17T15:26:30,296][DEBUG][o.e.r.TransportUpdateByQueryAction] [bs-dp-aidebugger-dev-001] executing initial scroll against [.kibana_task_manager]
[2022-10-17T15:26:30,329][DEBUG][o.e.r.TransportUpdateByQueryAction] [bs-dp-aidebugger-dev-001] [11062]: got scroll response with [0] hits
[2022-10-17T15:26:30,329][DEBUG][o.e.i.r.WorkerBulkByScrollTaskState] [bs-dp-aidebugger-dev-001] [11062]: preparing bulk request for [0s]
[2022-10-17T15:26:30,330][DEBUG][o.e.r.TransportUpdateByQueryAction] [bs-dp-aidebugger-dev-001] [11062]: preparing bulk request
[2022-10-17T15:26:30,330][DEBUG][o.e.r.TransportUpdateByQueryAction] [bs-dp-aidebugger-dev-001] [11062]: finishing without any catastrophic failures
[2022-10-17T15:26:30,340][DEBUG][o.e.r.TransportUpdateByQueryAction] [bs-dp-aidebugger-dev-001] Freed [1] contexts
[2022-10-17T15:26:30,554][DEBUG][o.a.h.u.PerformanceAdvisory] [bs-dp-aidebugger-dev-001] Both short-circuit local reads and UNIX domain socket are disabled.
[2022-10-17T15:26:30,565][DEBUG][o.a.h.h.p.d.s.DataTransferSaslUtil] [bs-dp-aidebugger-dev-001] DataTransferProtocol not using SaslPropertiesResolver, no QOP found in configuration for dfs.data.transfer.protection
[2022-10-17T15:26:30,571][DEBUG][o.e.r.h.HdfsRepository   ] [bs-dp-aidebugger-dev-001] Using file-system [org.apache.hadoop.fs.Hdfs@fc403ee8] for URI [hdfs://10.0.100.7:8020], path [/test20221017]
[2022-10-17T15:26:30,576][DEBUG][o.a.h.h.DFSClient        ] [bs-dp-aidebugger-dev-001] /test20221017: masked={ masked: rwxr-xr-x, unmasked: rwxrwxrwx }
[2022-10-17T15:26:30,649][DEBUG][o.a.h.i.Client           ] [bs-dp-aidebugger-dev-001] The ping interval is 60000 ms.
[2022-10-17T15:26:30,663][DEBUG][o.a.h.i.Client           ] [bs-dp-aidebugger-dev-001] Connecting to /10.0.100.7:8020
[2022-10-17T15:26:30,663][DEBUG][o.a.h.i.Client           ] [bs-dp-aidebugger-dev-001] Setup connection to /10.0.100.7:8020
[2022-10-17T15:26:30,701][DEBUG][o.a.h.i.Client           ] [bs-dp-aidebugger-dev-001] IPC Client (1670595564) connection to /10.0.100.7:8020 from elasticsearch: starting, having connections 1
[2022-10-17T15:26:30,708][DEBUG][o.a.h.i.Client           ] [bs-dp-aidebugger-dev-001] IPC Client (1670595564) connection to /10.0.100.7:8020 from elasticsearch sending #0 org.apache.hadoop.hdfs.protocol.ClientProtocol.mkdirs
[2022-10-17T15:26:30,713][DEBUG][o.a.h.i.Client           ] [bs-dp-aidebugger-dev-001] IPC Client (1670595564) connection to /10.0.100.7:8020 from elasticsearch got value #0
[2022-10-17T15:26:30,717][DEBUG][o.a.h.i.ProtobufRpcEngine2] [bs-dp-aidebugger-dev-001] Call: mkdirs took 105ms
[2022-10-17T15:26:30,732][DEBUG][o.a.h.h.DFSClient        ] [bs-dp-aidebugger-dev-001] /test20221017/tests-Lbuz2P_xTBypXhiby5MWsw: masked={ masked: rwxr-xr-x, unmasked: rwxrwxrwx }
[2022-10-17T15:26:30,744][DEBUG][o.a.h.i.Client           ] [bs-dp-aidebugger-dev-001] IPC Client (1670595564) connection to /10.0.100.7:8020 from elasticsearch sending #1 org.apache.hadoop.hdfs.protocol.ClientProtocol.mkdirs
[2022-10-17T15:26:30,748][DEBUG][o.a.h.i.Client           ] [bs-dp-aidebugger-dev-001] IPC Client (1670595564) connection to /10.0.100.7:8020 from elasticsearch got value #1
[2022-10-17T15:26:30,751][DEBUG][o.a.h.i.ProtobufRpcEngine2] [bs-dp-aidebugger-dev-001] Call: mkdirs took 19ms
[2022-10-17T15:26:30,754][DEBUG][o.a.h.i.Client           ] [bs-dp-aidebugger-dev-001] IPC Client (1670595564) connection to /10.0.100.7:8020 from elasticsearch sending #2 org.apache.hadoop.hdfs.protocol.ClientProtocol.getServerDefaults
[2022-10-17T15:26:30,762][DEBUG][o.a.h.i.Client           ] [bs-dp-aidebugger-dev-001] IPC Client (1670595564) connection to /10.0.100.7:8020 from elasticsearch got value #2
[2022-10-17T15:26:30,765][DEBUG][o.a.h.i.ProtobufRpcEngine2] [bs-dp-aidebugger-dev-001] Call: getServerDefaults took 11ms
[2022-10-17T15:26:30,814][DEBUG][o.a.h.i.Client           ] [bs-dp-aidebugger-dev-001] IPC Client (1670595564) connection to /10.0.100.7:8020 from elasticsearch sending #3 org.apache.hadoop.hdfs.protocol.ClientProtocol.create
[2022-10-17T15:26:30,824][DEBUG][o.a.h.i.Client           ] [bs-dp-aidebugger-dev-001] IPC Client (1670595564) connection to /10.0.100.7:8020 from elasticsearch got value #3
[2022-10-17T15:26:30,825][DEBUG][o.a.h.i.ProtobufRpcEngine2] [bs-dp-aidebugger-dev-001] Call: create took 12ms
[2022-10-17T15:26:30,862][DEBUG][o.a.h.h.DFSClient        ] [bs-dp-aidebugger-dev-001] computePacketChunkSize: src=/test20221017/tests-Lbuz2P_xTBypXhiby5MWsw/data-uKRo8rUxSjSDvTcOwUFF_w.dat, chunkSize=516, chunksPerPacket=126, packetSize=65016
[2022-10-17T15:26:30,891][DEBUG][o.a.h.h.DFSClient        ] [bs-dp-aidebugger-dev-001] WriteChunk allocating new packet seqno=0, src=/test20221017/tests-Lbuz2P_xTBypXhiby5MWsw/data-uKRo8rUxSjSDvTcOwUFF_w.dat, packetSize=65016, chunksPerPacket=126, bytesCurBlock=0, DFSOutputStream:block==null
[2022-10-17T15:26:30,891][DEBUG][o.a.h.h.DataStreamer     ] [bs-dp-aidebugger-dev-001] Queued packet seqno: 0 offsetInBlock: 0 lastPacketInBlock: false lastByteOffsetInBlock: 22, block==null
[2022-10-17T15:26:30,891][DEBUG][o.a.h.h.DataStreamer     ] [bs-dp-aidebugger-dev-001] Queued packet seqno: 1 offsetInBlock: 22 lastPacketInBlock: true lastByteOffsetInBlock: 22, block==null
[2022-10-17T15:26:30,892][DEBUG][o.a.h.h.DataStreamer     ] [bs-dp-aidebugger-dev-001] block==null waiting for ack for: 1
[2022-10-17T15:26:30,903][DEBUG][o.a.h.h.c.i.LeaseRenewer ] [bs-dp-aidebugger-dev-001] Lease renewer daemon for [DFSClient_NONMAPREDUCE_-1487133476_99] with renew id 1 started
[2022-10-17T15:26:30,904][DEBUG][o.a.h.h.DataStreamer     ] [bs-dp-aidebugger-dev-001] stage=PIPELINE_SETUP_CREATE, block==null
[2022-10-17T15:26:30,904][DEBUG][o.a.h.h.DataStreamer     ] [bs-dp-aidebugger-dev-001] Allocating new block: block==null
[2022-10-17T15:26:30,921][DEBUG][o.a.h.i.Client           ] [bs-dp-aidebugger-dev-001] IPC Client (1670595564) connection to /10.0.100.7:8020 from elasticsearch sending #4 org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock
[2022-10-17T15:26:30,932][DEBUG][o.a.h.i.Client           ] [bs-dp-aidebugger-dev-001] IPC Client (1670595564) connection to /10.0.100.7:8020 from elasticsearch got value #4
[2022-10-17T15:26:30,932][DEBUG][o.a.h.i.ProtobufRpcEngine2] [bs-dp-aidebugger-dev-001] Call: addBlock took 11ms
[2022-10-17T15:26:30,963][DEBUG][o.a.h.h.DataStreamer     ] [bs-dp-aidebugger-dev-001] pipeline = [DatanodeInfoWithStorage[10.0.100.9:9866,DS-931adf5b-56cb-4a43-a127-2711091b6a54,DISK], DatanodeInfoWithStorage[10.0.100.8:9866,DS-f24718ad-595e-414e-bad7-3db9c9d11c8c,DISK]], blk_1073746913_6096
[2022-10-17T15:26:30,967][DEBUG][o.a.h.h.DataStreamer     ] [bs-dp-aidebugger-dev-001] Connecting to datanode 10.0.100.9:9866
[2022-10-17T15:26:30,968][DEBUG][o.a.h.h.DataStreamer     ] [bs-dp-aidebugger-dev-001] Send buf size 256768
[2022-10-17T15:26:30,969][DEBUG][o.a.h.h.p.d.s.SaslDataTransferClient] [bs-dp-aidebugger-dev-001] SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
[2022-10-17T15:26:30,969][DEBUG][o.a.h.h.p.d.s.SaslDataTransferClient] [bs-dp-aidebugger-dev-001] SASL client skipping handshake in unsecured configuration for addr = /10.0.100.9, datanodeId = DatanodeInfoWithStorage[10.0.100.9:9866,DS-931adf5b-56cb-4a43-a127-2711091b6a54,DISK]
[2022-10-17T15:26:31,043][DEBUG][o.a.h.h.DataStreamer     ] [bs-dp-aidebugger-dev-001] nodes [DatanodeInfoWithStorage[10.0.100.9:9866,DS-931adf5b-56cb-4a43-a127-2711091b6a54,DISK], DatanodeInfoWithStorage[10.0.100.8:9866,DS-f24718ad-595e-414e-bad7-3db9c9d11c8c,DISK]] storageTypes [DISK, DISK] storageIDs [DS-931adf5b-56cb-4a43-a127-2711091b6a54, DS-f24718ad-595e-414e-bad7-3db9c9d11c8c]
[2022-10-17T15:26:31,045][DEBUG][o.a.h.h.DataStreamer     ] [bs-dp-aidebugger-dev-001] blk_1073746913_6096 sending packet seqno: 0 offsetInBlock: 0 lastPacketInBlock: false lastByteOffsetInBlock: 22
[2022-10-17T15:26:31,045][DEBUG][o.a.h.h.DataStreamer     ] [bs-dp-aidebugger-dev-001] stage=DATA_STREAMING, blk_1073746913_6096
[2022-10-17T15:26:31,110][DEBUG][o.a.h.h.DataStreamer     ] [bs-dp-aidebugger-dev-001] DFSClient seqno: 0 reply: SUCCESS reply: SUCCESS downstreamAckTimeNanos: 307670 flag: 0 flag: 0
[2022-10-17T15:26:31,116][DEBUG][o.a.h.h.DataStreamer     ] [bs-dp-aidebugger-dev-001] blk_1073746913_6096 sending packet seqno: 1 offsetInBlock: 22 lastPacketInBlock: true lastByteOffsetInBlock: 22
[2022-10-17T15:26:31,121][DEBUG][o.a.h.h.DataStreamer     ] [bs-dp-aidebugger-dev-001] DFSClient seqno: 1 reply: SUCCESS reply: SUCCESS downstreamAckTimeNanos: 948404 flag: 0 flag: 0
[2022-10-17T15:26:31,121][DEBUG][o.a.h.h.DataStreamer     ] [bs-dp-aidebugger-dev-001] Closing old block BP-668373763-10.0.100.7-1662471357966:blk_1073746913_6096
[2022-10-17T15:26:31,127][DEBUG][o.a.h.i.Client           ] [bs-dp-aidebugger-dev-001] IPC Client (1670595564) connection to /10.0.100.7:8020 from elasticsearch sending #5 org.apache.hadoop.hdfs.protocol.ClientProtocol.complete
[2022-10-17T15:26:31,128][DEBUG][o.a.h.i.Client           ] [bs-dp-aidebugger-dev-001] IPC Client (1670595564) connection to /10.0.100.7:8020 from elasticsearch got value #5
[2022-10-17T15:26:31,129][DEBUG][o.a.h.i.ProtobufRpcEngine2] [bs-dp-aidebugger-dev-001] Call: complete took 2ms
[2022-10-17T15:26:31,135][DEBUG][o.a.h.i.Client           ] [bs-dp-aidebugger-dev-001] IPC Client (1670595564) connection to /10.0.100.7:8020 from elasticsearch sending #6 org.apache.hadoop.hdfs.protocol.ClientProtocol.getBlockLocations
[2022-10-17T15:26:31,136][DEBUG][o.a.h.i.Client           ] [bs-dp-aidebugger-dev-001] IPC Client (1670595564) connection to /10.0.100.7:8020 from elasticsearch got value #6
[2022-10-17T15:26:31,136][DEBUG][o.a.h.i.ProtobufRpcEngine2] [bs-dp-aidebugger-dev-001] Call: getBlockLocations took 2ms
[2022-10-17T15:26:31,145][DEBUG][o.a.h.h.DFSClient        ] [bs-dp-aidebugger-dev-001] newInfo = LocatedBlocks{;  fileLength=22;  underConstruction=false;  blocks=[LocatedBlock{BP-668373763-10.0.100.7-1662471357966:blk_1073746910_6093; getBlockSize()=22; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.0.100.7:9866,DS-047a3410-6a5f-4381-b4e1-e931df946ec3,DISK], DatanodeInfoWithStorage[10.0.100.8:9866,DS-f24718ad-595e-414e-bad7-3db9c9d11c8c,DISK], DatanodeInfoWithStorage[10.0.100.9:9866,DS-931adf5b-56cb-4a43-a127-2711091b6a54,DISK]]; cachedLocs=[]}];  lastLocatedBlock=LocatedBlock{BP-668373763-10.0.100.7-1662471357966:blk_1073746910_6093; getBlockSize()=22; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.0.100.7:9866,DS-047a3410-6a5f-4381-b4e1-e931df946ec3,DISK], DatanodeInfoWithStorage[10.0.100.8:9866,DS-f24718ad-595e-414e-bad7-3db9c9d11c8c,DISK], DatanodeInfoWithStorage[10.0.100.9:9866,DS-931adf5b-56cb-4a43-a127-2711091b6a54,DISK]]; cachedLocs=[]};  isLastBlockComplete=true;  ecPolicy=null}
[2022-10-17T15:26:31,148][DEBUG][o.a.h.h.DFSClient        ] [bs-dp-aidebugger-dev-001] Connecting to datanode 10.0.100.7:9866
[2022-10-17T15:26:31,164][DEBUG][o.a.h.h.p.d.s.SaslDataTransferClient] [bs-dp-aidebugger-dev-001] SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
[2022-10-17T15:26:31,165][DEBUG][o.a.h.h.p.d.s.SaslDataTransferClient] [bs-dp-aidebugger-dev-001] SASL client skipping handshake in unsecured configuration for addr = /10.0.100.7, datanodeId = DatanodeInfoWithStorage[10.0.100.7:9866,DS-047a3410-6a5f-4381-b4e1-e931df946ec3,DISK]
[2022-10-17T15:26:31,177][DEBUG][o.a.h.h.DFSClient        ] [bs-dp-aidebugger-dev-001] DeadNode detection is not enabled or given block LocatedBlocks{;  fileLength=22;  underConstruction=false;  blocks=[LocatedBlock{BP-668373763-10.0.100.7-1662471357966:blk_1073746910_6093; getBlockSize()=22; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.0.100.7:9866,DS-047a3410-6a5f-4381-b4e1-e931df946ec3,DISK], DatanodeInfoWithStorage[10.0.100.8:9866,DS-f24718ad-595e-414e-bad7-3db9c9d11c8c,DISK], DatanodeInfoWithStorage[10.0.100.9:9866,DS-931adf5b-56cb-4a43-a127-2711091b6a54,DISK]]; cachedLocs=[]}];  lastLocatedBlock=LocatedBlock{BP-668373763-10.0.100.7-1662471357966:blk_1073746910_6093; getBlockSize()=22; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.0.100.7:9866,DS-047a3410-6a5f-4381-b4e1-e931df946ec3,DISK], DatanodeInfoWithStorage[10.0.100.8:9866,DS-f24718ad-595e-414e-bad7-3db9c9d11c8c,DISK], DatanodeInfoWithStorage[10.0.100.9:9866,DS-931adf5b-56cb-4a43-a127-2711091b6a54,DISK]]; cachedLocs=[]};  isLastBlockComplete=true;  ecPolicy=null} is null, skip to remove node.
[2022-10-17T15:26:31,178][DEBUG][o.a.h.h.DFSClient        ] [bs-dp-aidebugger-dev-001] DFSInputStream has been closed already
[2022-10-17T15:26:31,178][DEBUG][o.a.h.h.DFSClient        ] [bs-dp-aidebugger-dev-001] DeadNode detection is not enabled or given block LocatedBlocks{;  fileLength=22;  underConstruction=false;  blocks=[LocatedBlock{BP-668373763-10.0.100.7-1662471357966:blk_1073746910_6093; getBlockSize()=22; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.0.100.7:9866,DS-047a3410-6a5f-4381-b4e1-e931df946ec3,DISK], DatanodeInfoWithStorage[10.0.100.8:9866,DS-f24718ad-595e-414e-bad7-3db9c9d11c8c,DISK], DatanodeInfoWithStorage[10.0.100.9:9866,DS-931adf5b-56cb-4a43-a127-2711091b6a54,DISK]]; cachedLocs=[]}];  lastLocatedBlock=LocatedBlock{BP-668373763-10.0.100.7-1662471357966:blk_1073746910_6093; getBlockSize()=22; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.0.100.7:9866,DS-047a3410-6a5f-4381-b4e1-e931df946ec3,DISK], DatanodeInfoWithStorage[10.0.100.8:9866,DS-f24718ad-595e-414e-bad7-3db9c9d11c8c,DISK], DatanodeInfoWithStorage[10.0.100.9:9866,DS-931adf5b-56cb-4a43-a127-2711091b6a54,DISK]]; cachedLocs=[]};  isLastBlockComplete=true;  ecPolicy=null} is null, skip to remove node.

The log above was produced by this command:

PUT _snapshot/test
{
  "type" : "hdfs",
  "settings" : {
    "path" : "/test20221017",
    "conf" : {
      "dfs" : {
        "replication" : 1
      }
    },
    "compress" : "true",
    "uri" : "hdfs://10.0.100.7:8020"
  }
}

And /test20221017 did not exist in HDFS before I created the repository.

hdfs fsck /test20221017
Connecting to namenode via http://node4:9870/fsck?ugi=elasticsearch&path=%2Ftest20221017
FSCK started by elasticsearch (auth:SIMPLE) from /10.0.100.7 for path /test20221017 at Mon Oct 17 15:43:07 CST 2022


Status: HEALTHY
 Number of data-nodes:  3
 Number of racks:               1
 Total dirs:                    1
 Total symlinks:                0

Replicated Blocks:
 Total size:    0 B
 Total files:   0
 Total blocks (validated):      0
 Minimally replicated blocks:   0
 Over-replicated blocks:        0
 Under-replicated blocks:       0
 Mis-replicated blocks:         0
 Default replication factor:    3
 Average block replication:     0.0
 Missing blocks:                0
 Corrupt blocks:                0
 Missing replicas:              0
 Blocks queued for replication: 0

Erasure Coded Block Groups:
 Total size:    0 B
 Total files:   0
 Total block groups (validated):        0
 Minimally erasure-coded block groups:  0
 Over-erasure-coded block groups:       0
 Under-erasure-coded block groups:      0
 Unsatisfactory placement block groups: 0
 Average block group size:      0.0
 Missing block groups:          0
 Corrupt block groups:          0
 Missing internal blocks:       0
 Blocks queued for replication: 0
FSCK ended at Mon Oct 17 15:43:07 CST 2022 in 2 milliseconds


The filesystem under path '/test20221017' is HEALTHY

Weird.

Because AFAICS the code seems to pass all the settings, and the code on the HDFS side is using that property...

Could you try with this instead?

"settings" : {
  "path" : "/test20221017",
  "conf" : {
    "dfs.replication" : 1
  },
  "compress" : "true",
  "uri" : "hdfs://10.0.100.7:8020"
}
[2022-10-17T16:10:18,518][DEBUG][o.e.c.s.ClusterApplierService] [bs-dp-aidebugger-dev-001] applying settings from cluster state with version 19252
[2022-10-17T16:10:18,518][DEBUG][o.e.c.s.ClusterApplierService] [bs-dp-aidebugger-dev-001] apply cluster state with version 19252
[2022-10-17T16:10:18,518][DEBUG][o.e.r.RepositoriesService] [bs-dp-aidebugger-dev-001] registering repository [hdfs_snapshots]
[2022-10-17T16:10:18,519][DEBUG][o.e.r.RepositoriesService] [bs-dp-aidebugger-dev-001] registering repository [es_snapshots]
[2022-10-17T16:10:18,519][DEBUG][o.e.r.RepositoriesService] [bs-dp-aidebugger-dev-001] registering repository [test]
[2022-10-17T16:10:18,519][DEBUG][o.e.r.RepositoriesService] [bs-dp-aidebugger-dev-001] creating repository [hdfs][test2]
[2022-10-17T16:10:18,519][DEBUG][o.e.r.RepositoriesService] [bs-dp-aidebugger-dev-001] registering repository [test2]
[2022-10-17T16:10:18,523][DEBUG][o.e.c.s.ClusterApplierService] [bs-dp-aidebugger-dev-001] set locally applied cluster state to version 19252
[2022-10-17T16:10:18,524][DEBUG][o.e.x.s.s.SecurityIndexManager] [bs-dp-aidebugger-dev-001] Index [.security-tokens] is not available - no metadata
[2022-10-17T16:10:18,524][DEBUG][o.e.l.LicenseService     ] [bs-dp-aidebugger-dev-001] previous [LicensesMetadata{license={"uid":"e23f291e-8408-4686-8597-eed75abe91d0","type":"basic","issue_date_in_millis":1662461546367,"max_nodes":1000,"max_resource_units":null,"issued_to":"elasticsearch","issuer":"elasticsearch","signature":"////+wAAAOAJxmSSOaCIWi76MSfpJNHlGB84pwSMYJJeFP9Hz4TbsgMtfTxHYso8FJDGwNttNeyD5j95mIfUcdlTJHVdk9NpPTFMKL5wC7BXn4CTS6kNo8VTrnyVrGcy5zD9RtBHWocBkZ7aT69LRo364rUcQ2hle15eFhAb9H2A5MS3hX0HG9uTxyEidFCyu4gWV4qrZJArZjXuTfNGchMXH4rAaQ92Dioh5G7gU3iO4FMzBifvZMQo2UvM2rjIEwVkAQaOSNYOMmnTj5uO9SnIFljQ2MhS3DyKz0zV+VTjkqdHe7zjHA==","start_date_in_millis":-1}, trialVersion=null}]
[2022-10-17T16:10:18,524][DEBUG][o.e.l.LicenseService     ] [bs-dp-aidebugger-dev-001] current [LicensesMetadata{license={"uid":"e23f291e-8408-4686-8597-eed75abe91d0","type":"basic","issue_date_in_millis":1662461546367,"max_nodes":1000,"max_resource_units":null,"issued_to":"elasticsearch","issuer":"elasticsearch","signature":"////+wAAAOAJxmSSOaCIWi76MSfpJNHlGB84pwSMYJJeFP9Hz4TbsgMtfTxHYso8FJDGwNttNeyD5j95mIfUcdlTJHVdk9NpPTFMKL5wC7BXn4CTS6kNo8VTrnyVrGcy5zD9RtBHWocBkZ7aT69LRo364rUcQ2hle15eFhAb9H2A5MS3hX0HG9uTxyEidFCyu4gWV4qrZJArZjXuTfNGchMXH4rAaQ92Dioh5G7gU3iO4FMzBifvZMQo2UvM2rjIEwVkAQaOSNYOMmnTj5uO9SnIFljQ2MhS3DyKz0zV+VTjkqdHe7zjHA==","start_date_in_millis":-1}, trialVersion=null}]
[2022-10-17T16:10:18,524][DEBUG][o.e.c.s.ClusterApplierService] [bs-dp-aidebugger-dev-001] processing [ApplyCommitRequest{term=23, version=19252, sourceNode={bs-dp-aidebugger-dev-003}{HXTIZZ5PQaG-cdImLF6eAA}{71dh6mcwR1e2PbL8v7Xy3g}{10.0.100.6}{10.0.100.6:9300}{cdfhilmrstw}{ml.machine_memory=33020411904, ml.max_open_jobs=512, xpack.installed=true, ml.max_jvm_size=17179869184, transform.node=true}}]: took [0s] done applying updated cluster state (version: 19252, uuid: uyWVeMgjQX6HN16UtWlvNg)
[2022-10-17T16:10:18,564][DEBUG][o.e.x.s.a.e.ReservedRealm] [bs-dp-aidebugger-dev-001] realm [reserved] authenticated user [elastic], with roles [[superuser]] (cached)
[2022-10-17T16:10:18,564][DEBUG][o.e.x.s.a.RealmsAuthenticator] [bs-dp-aidebugger-dev-001] Authentication of [elastic] using realm [reserved/reserved] with token [UsernamePasswordToken] was [AuthenticationResult{status=SUCCESS, user=User[username=elastic,roles=[superuser],fullName=null,email=null,metadata={_reserved=true}], message=null, exception=null}]
[2022-10-17T16:10:18,574][DEBUG][o.e.r.h.HdfsRepository   ] [bs-dp-aidebugger-dev-001] Adding configuration to HDFS Client Configuration : dfs.replication = 1
[2022-10-17T16:10:18,596][DEBUG][o.a.h.s.Groups           ] [bs-dp-aidebugger-dev-001]  Creating new Groups object
[2022-10-17T16:10:18,599][DEBUG][o.a.h.u.NativeCodeLoader ] [bs-dp-aidebugger-dev-001] Trying to load the custom-built native-hadoop library...
[2022-10-17T16:10:18,600][DEBUG][o.a.h.u.NativeCodeLoader ] [bs-dp-aidebugger-dev-001] Failed to load native-hadoop with error: java.security.AccessControlException: access denied ("java.lang.RuntimePermission" "loadLibrary.hadoop")
[2022-10-17T16:10:18,600][DEBUG][o.a.h.u.NativeCodeLoader ] [bs-dp-aidebugger-dev-001] java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib
[2022-10-17T16:10:18,600][WARN ][o.a.h.u.NativeCodeLoader ] [bs-dp-aidebugger-dev-001] Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[2022-10-17T16:10:18,601][DEBUG][o.a.h.u.PerformanceAdvisory] [bs-dp-aidebugger-dev-001] Falling back to shell based
[2022-10-17T16:10:18,603][DEBUG][o.a.h.s.JniBasedUnixGroupsMappingWithFallback] [bs-dp-aidebugger-dev-001] Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping
[2022-10-17T16:10:18,615][DEBUG][o.e.x.s.a.e.ReservedRealm] [bs-dp-aidebugger-dev-001] realm [reserved] authenticated user [elastic], with roles [[superuser]] (cached)
[2022-10-17T16:10:18,615][DEBUG][o.e.x.s.a.RealmsAuthenticator] [bs-dp-aidebugger-dev-001] Authentication of [elastic] using realm [reserved/reserved] with token [UsernamePasswordToken] was [AuthenticationResult{status=SUCCESS, user=User[username=elastic,roles=[superuser],fullName=null,email=null,metadata={_reserved=true}], message=null, exception=null}]
[2022-10-17T16:10:18,698][DEBUG][o.a.h.s.Groups           ] [bs-dp-aidebugger-dev-001] Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000; warningDeltaMs=5000
[2022-10-17T16:10:18,699][DEBUG][o.e.r.h.HdfsRepository   ] [bs-dp-aidebugger-dev-001] Hadoop security enabled: [false]
[2022-10-17T16:10:18,699][DEBUG][o.e.r.h.HdfsRepository   ] [bs-dp-aidebugger-dev-001] Using Hadoop authentication method: [SIMPLE]
[2022-10-17T16:10:18,726][DEBUG][o.a.h.s.UserGroupInformation] [bs-dp-aidebugger-dev-001] Hadoop login
[2022-10-17T16:10:18,727][DEBUG][o.a.h.s.UserGroupInformation] [bs-dp-aidebugger-dev-001] hadoop login commit
[2022-10-17T16:10:18,729][DEBUG][o.a.h.s.UserGroupInformation] [bs-dp-aidebugger-dev-001] Using local user: UnixPrincipal: elasticsearch
[2022-10-17T16:10:18,729][DEBUG][o.a.h.s.UserGroupInformation] [bs-dp-aidebugger-dev-001] Using user: "UnixPrincipal: elasticsearch" with name: elasticsearch
[2022-10-17T16:10:18,729][DEBUG][o.a.h.s.UserGroupInformation] [bs-dp-aidebugger-dev-001] User entry: "elasticsearch"
[2022-10-17T16:10:18,730][DEBUG][o.a.h.s.UserGroupInformation] [bs-dp-aidebugger-dev-001] UGI loginUser: elasticsearch (auth:SIMPLE)
[2022-10-17T16:10:18,734][DEBUG][o.a.h.s.UserGroupInformation] [bs-dp-aidebugger-dev-001] PrivilegedAction [as: elasticsearch (auth:SIMPLE)][action: org.elasticsearch.repositories.hdfs.HdfsRepository$$Lambda$7182/0x0000000801cd4ae0@251f94fa]
java.lang.Exception: null
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1852) [hadoop-client-api-7.17.5.jar:7.17.5]
        at org.elasticsearch.repositories.hdfs.HdfsRepository.createBlobstore(HdfsRepository.java:136) [repository-hdfs-7.17.5.jar:7.17.5]
        at org.elasticsearch.repositories.hdfs.HdfsRepository.lambda$createBlobStore$1(HdfsRepository.java:247) [repository-hdfs-7.17.5.jar:7.17.5]
        at java.security.AccessController.doPrivileged(AccessController.java:318) [?:?]
        at org.elasticsearch.repositories.hdfs.HdfsRepository.createBlobStore(HdfsRepository.java:246) [repository-hdfs-7.17.5.jar:7.17.5]
        at org.elasticsearch.repositories.hdfs.HdfsRepository.createBlobStore(HdfsRepository.java:44) [repository-hdfs-7.17.5.jar:7.17.5]
        at org.elasticsearch.repositories.blobstore.BlobStoreRepository.blobStore(BlobStoreRepository.java:746) [elasticsearch-7.17.5.jar:7.17.5]
        at org.elasticsearch.repositories.blobstore.BlobStoreRepository.verify(BlobStoreRepository.java:3217) [elasticsearch-7.17.5.jar:7.17.5]
        at org.elasticsearch.repositories.VerifyNodeRepositoryAction.doVerify(VerifyNodeRepositoryAction.java:130) [elasticsearch-7.17.5.jar:7.17.5]
        at org.elasticsearch.repositories.VerifyNodeRepositoryAction.access$400(VerifyNodeRepositoryAction.java:37) [elasticsearch-7.17.5.jar:7.17.5]
        at org.elasticsearch.repositories.VerifyNodeRepositoryAction$VerifyNodeRepositoryRequestHandler.messageReceived(VerifyNodeRepositoryAction.java:162) [elasticsearch-7.17.5.jar:7.17.5]
        at org.elasticsearch.repositories.VerifyNodeRepositoryAction$VerifyNodeRepositoryRequestHandler.messageReceived(VerifyNodeRepositoryAction.java:157) [elasticsearch-7.17.5.jar:7.17.5]
        at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$1.doRun(SecurityServerTransportInterceptor.java:341) [x-pack-security-7.17.5.jar:7.17.5]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26) [elasticsearch-7.17.5.jar:7.17.5]
        at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$3.onResponse(SecurityServerTransportInterceptor.java:404) [x-pack-security-7.17.5.jar:7.17.5]
        at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$3.onResponse(SecurityServerTransportInterceptor.java:394) [x-pack-security-7.17.5.jar:7.17.5]
        at org.elasticsearch.xpack.security.authz.AuthorizationService.authorizeSystemUser(AuthorizationService.java:620) [x-pack-security-7.17.5.jar:7.17.5]
        at org.elasticsearch.xpack.security.authz.AuthorizationService.authorize(AuthorizationService.java:250) [x-pack-security-7.17.5.jar:7.17.5]
        at org.elasticsearch.xpack.security.transport.ServerTransportFilter$NodeProfile.lambda$inbound$1(ServerTransportFilter.java:136) [x-pack-security-7.17.5.jar:7.17.5]
        at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:136) [elasticsearch-7.17.5.jar:7.17.5]
        at org.elasticsearch.action.ActionListener$MappedActionListener.onResponse(ActionListener.java:101) [elasticsearch-7.17.5.jar:7.17.5]
        at org.elasticsearch.xpack.security.authc.AuthenticatorChain.authenticateAsync(AuthenticatorChain.java:102) [x-pack-security-7.17.5.jar:7.17.5]
        at org.elasticsearch.xpack.security.authc.AuthenticationService.authenticate(AuthenticationService.java:199) [x-pack-security-7.17.5.jar:7.17.5]
        at org.elasticsearch.xpack.security.transport.ServerTransportFilter$NodeProfile.inbound(ServerTransportFilter.java:128) [x-pack-security-7.17.5.jar:7.17.5]
        at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler.messageReceived(SecurityServerTransportInterceptor.java:415) [x-pack-security-7.17.5.jar:7.17.5]
        at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:67) [elasticsearch-7.17.5.jar:7.17.5]
        at org.elasticsearch.transport.InboundHandler$1.doRun(InboundHandler.java:260) [elasticsearch-7.17.5.jar:7.17.5]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:777) [elasticsearch-7.17.5.jar:7.17.5]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26) [elasticsearch-7.17.5.jar:7.17.5]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]
        at java.lang.Thread.run(Thread.java:833) [?:?]
[2022-10-17T16:10:18,849][DEBUG][o.a.h.c.Tracer           ] [bs-dp-aidebugger-dev-001] sampler.classes = ; loaded no samplers
[2022-10-17T16:10:18,852][DEBUG][o.a.h.c.Tracer           ] [bs-dp-aidebugger-dev-001] span.receiver.classes = ; loaded no span receivers
[2022-10-17T16:10:18,871][DEBUG][o.a.h.h.c.i.DfsClientConf] [bs-dp-aidebugger-dev-001] dfs.client.use.legacy.blockreader.local = false
[2022-10-17T16:10:18,872][DEBUG][o.a.h.h.c.i.DfsClientConf] [bs-dp-aidebugger-dev-001] dfs.client.read.shortcircuit = false
[2022-10-17T16:10:18,872][DEBUG][o.a.h.h.c.i.DfsClientConf] [bs-dp-aidebugger-dev-001] dfs.client.domain.socket.data.traffic = false
[2022-10-17T16:10:18,872][DEBUG][o.a.h.h.c.i.DfsClientConf] [bs-dp-aidebugger-dev-001] dfs.domain.socket.path = 
[2022-10-17T16:10:18,892][DEBUG][o.a.h.h.DFSClient        ] [bs-dp-aidebugger-dev-001] Sets dfs.client.block.write.replace-datanode-on-failure.min-replication to 0
[2022-10-17T16:10:18,908][DEBUG][o.a.h.i.r.RetryUtils     ] [bs-dp-aidebugger-dev-001] multipleLinearRandomRetry = null
[2022-10-17T16:10:18,926][DEBUG][o.a.h.i.Server           ] [bs-dp-aidebugger-dev-001] rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apache.hadoop.ipc.ProtobufRpcEngine2$RpcProtobufRequest, rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker@370d0fed
[2022-10-17T16:10:18,931][DEBUG][o.a.h.i.Client           ] [bs-dp-aidebugger-dev-001] getting client out of cache: Client-71121bfa14e449a0bf4394f373da5f10
[2022-10-17T16:10:19,217][DEBUG][o.e.i.s.IndexShard       ] [bs-dp-aidebugger-dev-001] [elasticsearch-forest-snapshot_failure_node-2022-10-17-000029][1] shard is now inactive
[2022-10-17T16:10:19,217][DEBUG][o.e.i.f.SyncedFlushService] [bs-dp-aidebugger-dev-001] flushing shard [elasticsearch-forest-snapshot_failure_node-2022-10-17-000029][1], node[uKRo8rUxSjSDvTcOwUFF_w], [R], s[STARTED], a[id=kDnIATAkRrK4DmAB2yqL0g] on inactive
[2022-10-17T16:10:19,217][DEBUG][o.e.i.s.IndexShard       ] [bs-dp-aidebugger-dev-001] [elasticsearch-forest-snapshot_failure_node-2022-10-17-000029][2] shard is now inactive
[2022-10-17T16:10:19,217][DEBUG][o.e.i.f.SyncedFlushService] [bs-dp-aidebugger-dev-001] flushing shard [elasticsearch-forest-snapshot_failure_node-2022-10-17-000029][2], node[uKRo8rUxSjSDvTcOwUFF_w], [R], s[STARTED], a[id=DVLtvLVwR0eO966St9k5pg] on inactive
[2022-10-17T16:10:19,218][DEBUG][o.e.i.s.IndexShard       ] [bs-dp-aidebugger-dev-001] [elasticsearch-forest-snapshot_event_statistics-2022-10-17-000029][2] shard is now inactive
[2022-10-17T16:10:19,218][DEBUG][o.e.i.f.SyncedFlushService] [bs-dp-aidebugger-dev-001] flushing shard [elasticsearch-forest-snapshot_event_statistics-2022-10-17-000029][2], node[uKRo8rUxSjSDvTcOwUFF_w], [R], s[STARTED], a[id=uQt9Q5GlQIOyb_PuZd3TpA] on inactive
[2022-10-17T16:10:19,218][DEBUG][o.e.i.s.IndexShard       ] [bs-dp-aidebugger-dev-001] [elasticsearch-forest-event_type_instance-2022-10-17-000029][2] shard is now inactive
[2022-10-17T16:10:19,218][DEBUG][o.e.i.f.SyncedFlushService] [bs-dp-aidebugger-dev-001] flushing shard [elasticsearch-forest-event_type_instance-2022-10-17-000029][2], node[uKRo8rUxSjSDvTcOwUFF_w], [R], s[STARTED], a[id=HWx5a9WQSMmYa54w1iq4lA] on inactive
[2022-10-17T16:10:19,256][DEBUG][o.a.h.u.PerformanceAdvisory] [bs-dp-aidebugger-dev-001] Both short-circuit local reads and UNIX domain socket are disabled.
[2022-10-17T16:10:19,265][DEBUG][o.a.h.h.p.d.s.DataTransferSaslUtil] [bs-dp-aidebugger-dev-001] DataTransferProtocol not using SaslPropertiesResolver, no QOP found in configuration for dfs.data.transfer.protection
[2022-10-17T16:10:19,271][DEBUG][o.e.r.h.HdfsRepository   ] [bs-dp-aidebugger-dev-001] Using file-system [org.apache.hadoop.fs.Hdfs@fc403ee8] for URI [hdfs://10.0.100.7:8020], path [/test202210172]
[2022-10-17T16:10:19,275][DEBUG][o.a.h.h.DFSClient        ] [bs-dp-aidebugger-dev-001] /test202210172: masked={ masked: rwxr-xr-x, unmasked: rwxrwxrwx }
[2022-10-17T16:10:19,316][DEBUG][o.a.h.i.Client           ] [bs-dp-aidebugger-dev-001] The ping interval is 60000 ms.
[2022-10-17T16:10:19,318][DEBUG][o.a.h.i.Client           ] [bs-dp-aidebugger-dev-001] Connecting to /10.0.100.7:8020
[2022-10-17T16:10:19,318][DEBUG][o.a.h.i.Client           ] [bs-dp-aidebugger-dev-001] Setup connection to /10.0.100.7:8020
[2022-10-17T16:10:19,328][DEBUG][o.a.h.i.Client           ] [bs-dp-aidebugger-dev-001] IPC Client (1819829982) connection to /10.0.100.7:8020 from elasticsearch: starting, having connections 1
[2022-10-17T16:10:19,329][DEBUG][o.a.h.i.Client           ] [bs-dp-aidebugger-dev-001] IPC Client (1819829982) connection to /10.0.100.7:8020 from elasticsearch sending #0 org.apache.hadoop.hdfs.protocol.ClientProtocol.mkdirs
[2022-10-17T16:10:19,337][DEBUG][o.a.h.i.Client           ] [bs-dp-aidebugger-dev-001] IPC Client (1819829982) connection to /10.0.100.7:8020 from elasticsearch got value #0
[2022-10-17T16:10:19,338][DEBUG][o.a.h.i.ProtobufRpcEngine2] [bs-dp-aidebugger-dev-001] Call: mkdirs took 47ms
[2022-10-17T16:10:19,343][DEBUG][o.a.h.h.DFSClient        ] [bs-dp-aidebugger-dev-001] /test202210172/tests-FYdqC14oRki3Wj-BssvCxQ: masked={ masked: rwxr-xr-x, unmasked: rwxrwxrwx }
[2022-10-17T16:10:19,344][DEBUG][o.a.h.i.Client           ] [bs-dp-aidebugger-dev-001] IPC Client (1819829982) connection to /10.0.100.7:8020 from elasticsearch sending #1 org.apache.hadoop.hdfs.protocol.ClientProtocol.mkdirs
[2022-10-17T16:10:19,345][DEBUG][o.a.h.i.Client           ] [bs-dp-aidebugger-dev-001] IPC Client (1819829982) connection to /10.0.100.7:8020 from elasticsearch got value #1
[2022-10-17T16:10:19,345][DEBUG][o.a.h.i.ProtobufRpcEngine2] [bs-dp-aidebugger-dev-001] Call: mkdirs took 1ms
[2022-10-17T16:10:19,352][DEBUG][o.a.h.i.Client           ] [bs-dp-aidebugger-dev-001] IPC Client (1819829982) connection to /10.0.100.7:8020 from elasticsearch sending #2 org.apache.hadoop.hdfs.protocol.ClientProtocol.getServerDefaults
[2022-10-17T16:10:19,353][DEBUG][o.a.h.i.Client           ] [bs-dp-aidebugger-dev-001] IPC Client (1819829982) connection to /10.0.100.7:8020 from elasticsearch got value #2
[2022-10-17T16:10:19,355][DEBUG][o.a.h.i.ProtobufRpcEngine2] [bs-dp-aidebugger-dev-001] Call: getServerDefaults took 5ms
[2022-10-17T16:10:19,398][DEBUG][o.a.h.i.Client           ] [bs-dp-aidebugger-dev-001] IPC Client (1819829982) connection to /10.0.100.7:8020 from elasticsearch sending #3 org.apache.hadoop.hdfs.protocol.ClientProtocol.create
[2022-10-17T16:10:19,400][DEBUG][o.a.h.i.Client           ] [bs-dp-aidebugger-dev-001] IPC Client (1819829982) connection to /10.0.100.7:8020 from elasticsearch got value #3
[2022-10-17T16:10:19,400][DEBUG][o.a.h.i.ProtobufRpcEngine2] [bs-dp-aidebugger-dev-001] Call: create took 3ms
[2022-10-17T16:10:19,413][DEBUG][o.a.h.h.DFSClient        ] [bs-dp-aidebugger-dev-001] computePacketChunkSize: src=/test202210172/tests-FYdqC14oRki3Wj-BssvCxQ/data-uKRo8rUxSjSDvTcOwUFF_w.dat, chunkSize=516, chunksPerPacket=126, packetSize=65016
[2022-10-17T16:10:19,419][DEBUG][o.a.h.h.c.i.LeaseRenewer ] [bs-dp-aidebugger-dev-001] Lease renewer daemon for [DFSClient_NONMAPREDUCE_-1499330844_106] with renew id 1 started
[2022-10-17T16:10:19,421][DEBUG][o.a.h.h.DFSClient        ] [bs-dp-aidebugger-dev-001] WriteChunk allocating new packet seqno=0, src=/test202210172/tests-FYdqC14oRki3Wj-BssvCxQ/data-uKRo8rUxSjSDvTcOwUFF_w.dat, packetSize=65016, chunksPerPacket=126, bytesCurBlock=0, DFSOutputStream:block==null
[2022-10-17T16:10:19,421][DEBUG][o.a.h.h.DataStreamer     ] [bs-dp-aidebugger-dev-001] Queued packet seqno: 0 offsetInBlock: 0 lastPacketInBlock: false lastByteOffsetInBlock: 22, block==null
[2022-10-17T16:10:19,421][DEBUG][o.a.h.h.DataStreamer     ] [bs-dp-aidebugger-dev-001] Queued packet seqno: 1 offsetInBlock: 22 lastPacketInBlock: true lastByteOffsetInBlock: 22, block==null
[2022-10-17T16:10:19,421][DEBUG][o.a.h.h.DataStreamer     ] [bs-dp-aidebugger-dev-001] block==null waiting for ack for: 1
[2022-10-17T16:10:19,422][DEBUG][o.a.h.h.DataStreamer     ] [bs-dp-aidebugger-dev-001] stage=PIPELINE_SETUP_CREATE, block==null
[2022-10-17T16:10:19,422][DEBUG][o.a.h.h.DataStreamer     ] [bs-dp-aidebugger-dev-001] Allocating new block: block==null
[2022-10-17T16:10:19,438][DEBUG][o.a.h.i.Client           ] [bs-dp-aidebugger-dev-001] IPC Client (1819829982) connection to /10.0.100.7:8020 from elasticsearch sending #4 org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock
[2022-10-17T16:10:19,440][DEBUG][o.a.h.i.Client           ] [bs-dp-aidebugger-dev-001] IPC Client (1819829982) connection to /10.0.100.7:8020 from elasticsearch got value #4
[2022-10-17T16:10:19,441][DEBUG][o.a.h.i.ProtobufRpcEngine2] [bs-dp-aidebugger-dev-001] Call: addBlock took 2ms
[2022-10-17T16:10:19,452][DEBUG][o.a.h.h.DataStreamer     ] [bs-dp-aidebugger-dev-001] pipeline = [DatanodeInfoWithStorage[10.0.100.7:9866,DS-047a3410-6a5f-4381-b4e1-e931df946ec3,DISK], DatanodeInfoWithStorage[10.0.100.9:9866,DS-931adf5b-56cb-4a43-a127-2711091b6a54,DISK]], blk_1073746917_6100
[2022-10-17T16:10:19,453][DEBUG][o.a.h.h.DataStreamer     ] [bs-dp-aidebugger-dev-001] Connecting to datanode 10.0.100.7:9866
[2022-10-17T16:10:19,456][DEBUG][o.a.h.h.DataStreamer     ] [bs-dp-aidebugger-dev-001] Send buf size 291584
[2022-10-17T16:10:19,456][DEBUG][o.a.h.h.p.d.s.SaslDataTransferClient] [bs-dp-aidebugger-dev-001] SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
[2022-10-17T16:10:19,456][DEBUG][o.a.h.h.p.d.s.SaslDataTransferClient] [bs-dp-aidebugger-dev-001] SASL client skipping handshake in unsecured configuration for addr = /10.0.100.7, datanodeId = DatanodeInfoWithStorage[10.0.100.7:9866,DS-047a3410-6a5f-4381-b4e1-e931df946ec3,DISK]
[2022-10-17T16:10:19,497][DEBUG][o.a.h.h.DataStreamer     ] [bs-dp-aidebugger-dev-001] nodes [DatanodeInfoWithStorage[10.0.100.7:9866,DS-047a3410-6a5f-4381-b4e1-e931df946ec3,DISK], DatanodeInfoWithStorage[10.0.100.9:9866,DS-931adf5b-56cb-4a43-a127-2711091b6a54,DISK]] storageTypes [DISK, DISK] storageIDs [DS-047a3410-6a5f-4381-b4e1-e931df946ec3, DS-931adf5b-56cb-4a43-a127-2711091b6a54]
[2022-10-17T16:10:19,498][DEBUG][o.a.h.h.DataStreamer     ] [bs-dp-aidebugger-dev-001] blk_1073746917_6100 sending packet seqno: 0 offsetInBlock: 0 lastPacketInBlock: false lastByteOffsetInBlock: 22
[2022-10-17T16:10:19,499][DEBUG][o.a.h.h.DataStreamer     ] [bs-dp-aidebugger-dev-001] stage=DATA_STREAMING, blk_1073746917_6100
PUT _snapshot/test2
{
  "type" : "hdfs",
  "settings" : {
    "path" : "/test202210172",
    "conf" : {
      "dfs.replication" : 1
    },
    "compress" : "true",
    "uri" : "hdfs://10.0.100.7:8020"
  }
}
hdfs fsck /test202210172
Connecting to namenode via http://node4:9870/fsck?ugi=elasticsearch&path=%2Ftest202210172
FSCK started by elasticsearch (auth:SIMPLE) from /10.0.100.7 for path /test202210172 at Mon Oct 17 16:15:33 CST 2022


Status: HEALTHY
 Number of data-nodes:  3
 Number of racks:               1
 Total dirs:                    1
 Total symlinks:                0

Replicated Blocks:
 Total size:    0 B
 Total files:   0
 Total blocks (validated):      0
 Minimally replicated blocks:   0
 Over-replicated blocks:        0
 Under-replicated blocks:       0
 Mis-replicated blocks:         0
 Default replication factor:    3
 Average block replication:     0.0
 Missing blocks:                0
 Corrupt blocks:                0
 Missing replicas:              0
 Blocks queued for replication: 0

Erasure Coded Block Groups:
 Total size:    0 B
 Total files:   0
 Total block groups (validated):        0
 Minimally erasure-coded block groups:  0
 Over-erasure-coded block groups:       0
 Under-erasure-coded block groups:      0
 Unsatisfactory placement block groups: 0
 Average block group size:      0.0
 Missing block groups:          0
 Corrupt block groups:          0
 Missing internal blocks:       0
 Blocks queued for replication: 0
FSCK ended at Mon Oct 17 16:15:33 CST 2022 in 1 milliseconds


The filesystem under path '/test202210172' is HEALTHY
hdfs dfs -du -s -h /test202210172
7.0 G  21.1 G  /test202210172

And I found this in the log:

cat aiops.log |grep dfs.replication
[2022-10-17T15:26:29,496][DEBUG][o.e.r.h.HdfsRepository   ] [bs-dp-aidebugger-dev-001] Adding configuration to HDFS Client Configuration : dfs.replication = 1
[2022-10-17T16:10:18,574][DEBUG][o.e.r.h.HdfsRepository   ] [bs-dp-aidebugger-dev-001] Adding configuration to HDFS Client Configuration : dfs.replication = 1
    protected HdfsBlobStore createBlobStore() {
        // initialize our blobstore using elevated privileges.
        SpecialPermission.check();
        final HdfsBlobStore blobStore = AccessController.doPrivileged(
            (PrivilegedAction<HdfsBlobStore>) () -> createBlobstore(uri, pathSetting, getMetadata().settings())
        );

Does it not use the confSettings?

        final Settings confSettings = repositorySettings.getByPrefix("conf.");
        for (String key : confSettings.keySet()) {
            logger.debug("Adding configuration to HDFS Client Configuration : {} = {}", key, confSettings.get(key));
            hadoopConfiguration.set(key, confSettings.get(key));
        }

IMO this proves that the configuration is correctly read by the Elasticsearch HDFS plugin and passed to the HDFS client...
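
Since getByPrefix("conf.") strips the prefix, for your repository that loop should effectively end up calling (illustrative sketch only):

hadoopConfiguration.set("dfs.replication", "1");

which is exactly the debug line you found: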

Adding configuration to HDFS Client Configuration : dfs.replication = 1
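
It might also be worth listing the files to see the replication factor HDFS actually reports for them (the second column of hdfs dfs -ls output is the per-file replication factor):

hdfs dfs -ls -R /test202210172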

Not sure why this does not work. Maybe open an issue in the Elasticsearch GitHub repo?

Please open an issue; my English is bad. Thank you!

@james.baiera What do you think? Is there a way to know where this problem is coming from?
