Failed shard on node [bnK31ibrRG6bSpw_pYK2BA]: shard failure, reason [corrupt file (source: [index id[CTR0CY4BdvDW7Z2cSc8C] origin[PRIMARY] seq#[27209007]])], failure org.apache.lucene.store.AlreadyClosedException: this IndexWriter is closed

I deployed Elasticsearch, Kibana, and Filebeat in Kubernetes three days ago through the ECK operator.
I was able to access it through the Kibana dashboard, but now Elasticsearch is showing a red status.

My Elasticsearch configuration:

Kubernetes cluster - AKS (1.26.0)
Elasticsearch version - 8.10.4
Kibana - 8.10.4
Filebeat - 8.10.4

For Elasticsearch alone I have 3 nodes; every node has 2 CPUs and 8 GB of memory, the master node has a 50 GB disk, and the data nodes have 200 GB disks.

The error I am getting:

curl -u "elastic:password" -k "https://elastic-search-cluster-es-http:9200/_cluster/health"
{"cluster_name":"elastic-search-cluster","status":"red","timed_out":false,"number_of_nodes":3,"number_of_data_nodes":2,"active_primary_shards":148,"active_shards":296,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":8,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":97.36842105263158}


Request:
GET _cluster/allocation/explain
{
  "index": "filebeat-2024.03.04",
  "shard": 0,
  "primary": true
}


Response:
{
  "index": "filebeat-2024.03.04",
  "shard": 0,
  "primary": true,
  "current_state": "unassigned",
  "unassigned_info": {
    "reason": "ALLOCATION_FAILED",
    "at": "2024-03-04T12:33:19.751Z",
    "failed_allocation_attempts": 1,
    "details": """failed shard on node [bnK31ibrRG6bSpw_pYK2BA]: shard failure, reason [corrupt file (source: [index id[CTR0CY4BdvDW7Z2cSc8C] origin[PRIMARY] seq#[27209007]])], failure org.apache.lucene.store.AlreadyClosedException: this IndexWriter is closed
	at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:908)
	at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:921)
	at org.apache.lucene.index.IndexWriter.updateDocuments(IndexWriter.java:1542)
	at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1830)
	at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1470)
	at org.elasticsearch.index.engine.InternalEngine.addDocs(InternalEngine.java:1412)
	at org.elasticsearch.index.engine.InternalEngine.indexIntoLucene(InternalEngine.java:1348)
	at org.elasticsearch.index.engine.InternalEngine.index(InternalEngine.java:1151)
	at org.elasticsearch.index.shard.IndexShard.index(IndexShard.java:1052)
	at org.elasticsearch.index.shard.IndexShard.applyIndexOperation(IndexShard.java:985)
	at org.elasticsearch.index.shard.IndexShard.applyIndexOperationOnPrimary(IndexShard.java:902)
	at org.elasticsearch.action.bulk.TransportShardBulkAction.executeBulkItemRequest(TransportShardBulkAction.java:355)
	at org.elasticsearch.action.bulk.TransportShardBulkAction$2.doRun(TransportShardBulkAction.java:219)
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)
	at org.elasticsearch.action.bulk.TransportShardBulkAction.performOnPrimary(TransportShardBulkAction.java:286)
	at org.elasticsearch.action.bulk.TransportShardBulkAction.dispatchedShardOperationOnPrimary(TransportShardBulkAction.java:137)
	at org.elasticsearch.action.bulk.TransportShardBulkAction.dispatchedShardOperationOnPrimary(TransportShardBulkAction.java:74)
	at org.elasticsearch.action.support.replication.TransportWriteAction$1.doRun(TransportWriteAction.java:215)
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)
	at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:33)
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:983)
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
	at java.lang.Thread.run(Thread.java:1583)
Caused by: org.apache.lucene.index.CorruptIndexException: compound sub-files must have a valid codec header and footer: file is too small (0 bytes) (resource=BufferedChecksumIndexInput(MemorySegmentIndexInput(path="/usr/share/elasticsearch/data/indices/rWCqYC5IQNeCy8P9I2ptGg/0/index/_2l6.nvd")))
	at org.apache.lucene.codecs.CodecUtil.verifyAndCopyIndexHeader(CodecUtil.java:279)
	at org.apache.lucene.codecs.lucene90.Lucene90CompoundFormat.writeCompoundFile(Lucene90CompoundFormat.java:146)
	at org.apache.lucene.codecs.lucene90.Lucene90CompoundFormat.write(Lucene90CompoundFormat.java:99)
	at org.apache.lucene.index.IndexWriter.createCompoundFile(IndexWriter.java:5760)
	at org.apache.lucene.index.DocumentsWriterPerThread.sealFlushedSegment(DocumentsWriterPerThread.java:546)
	at org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:474)
	at org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:492)
	at org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:671)
	at org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:3626)
	at org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:4061)
	at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:4023)
	at org.elasticsearch.index.engine.InternalEngine.commitIndexWriter(InternalEngine.java:2824)
	at org.elasticsearch.index.engine.InternalEngine.flush(InternalEngine.java:2149)
	at org.elasticsearch.index.shard.IndexShard.flush(IndexShard.java:1406)
	at org.elasticsearch.index.shard.IndexShard$6.doRun(IndexShard.java:3743)
	... 5 more
""",
    "last_allocation_status": "no_valid_shard_copy"
  },
  "can_allocate": "no_valid_shard_copy",
  "allocate_explanation": "Elasticsearch can't allocate this shard because all the copies of its data in the cluster are stale or corrupt. Elasticsearch will allocate this shard when a node containing a good copy of its data joins the cluster. If no such node is available, restore this index from a recent snapshot.",
  "node_allocation_decisions": [
    {
      "node_id": "6PdKYmyEQWSbBbIARcj56g",
      "node_name": "elastic-search-cluster-es-data-0",
      "transport_address": "10.244.5.4:9300",
      "node_attributes": {
        "transform.config_version": "10.0.0",
        "xpack.installed": "true",
        "k8s_node_name": "aks-elasticnode-20666680-vmss000000",
        "ml.config_version": "10.0.0"
      },
      "node_decision": "no",
      "store": {
        "in_sync": false,
        "allocation_id": "pj4S5SgERBePvCw6SjhAxQ"
      }
    },
    {
      "node_id": "bnK31ibrRG6bSpw_pYK2BA",
      "node_name": "elastic-search-cluster-es-data-1",
      "transport_address": "10.244.2.2:9300",
      "node_attributes": {
        "transform.config_version": "10.0.0",
        "xpack.installed": "true",
        "k8s_node_name": "aks-elasticnode-20666680-vmss000002",
        "ml.config_version": "10.0.0"
      },
      "node_decision": "no",
      "store": {
        "in_sync": true,
        "allocation_id": "HgJJIj1dTfacxVnj78HmXA",
        "store_exception": {
          "type": "corrupt_index_exception",
          "reason": "failed engine (reason: [corrupt file (source: [index id[CTR0CY4BdvDW7Z2cSc8C] origin[PRIMARY] seq#[27209007]])]) (resource=preexisting_corruption)",
          "caused_by": {
            "type": "i_o_exception",
            "reason": "failed engine (reason: [corrupt file (source: [index id[CTR0CY4BdvDW7Z2cSc8C] origin[PRIMARY] seq#[27209007]])])",
            "caused_by": {
              "type": "corrupt_index_exception",
              "reason": """compound sub-files must have a valid codec header and footer: file is too small (0 bytes) (resource=BufferedChecksumIndexInput(MemorySegmentIndexInput(path="/usr/share/elasticsearch/data/indices/rWCqYC5IQNeCy8P9I2ptGg/0/index/_2l6.nvd")))"""
            }
          }
        }
      }
    }
  ]
}

Please let me know what additional details you need and I will share them.

This is the error I am getting from this index. Please help me solve it.
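If a listing of the unassigned shards would also help, I can share the output of something like this:

GET _cat/shards?v&h=index,shard,prirep,state,unassigned.reason&s=state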

Looks like your storage is not working correctly, see these docs for more info.

Hi @DavidTurner,
I have gone through the docs you shared, and the solution there is to add data nodes or increase their size.
After adding a data node I ran into a problem: the node would run and then fail after some time, and describing the pod showed "readiness probe failed". I tried 3 or 4 times and got the same issue every time, so I had to undo the changes. Instead, I have now added extra storage (previously I had 190 GB) to the data nodes, like this:

- name: data
  count: 2
  podTemplate:
    spec:
      nodeSelector:
        namespace: "elastic-system"
      tolerations:
      - key: "namespace"
        # operator: "Exists"  (always commented out when applied for the first time)
        operator: "Equal"
        value: "elastic-system"
        effect: "NoSchedule"
      initContainers:
      - name: sysctl
        securityContext:
          privileged: true
          runAsUser: 0
        command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
      containers:
      - name: elasticsearch
        readinessProbe:
          exec:
            command:
            - bash
            - -c
            - /mnt/elastic-internal/scripts/readiness-probe-script.sh
          failureThreshold: 3
          initialDelaySeconds: 20
          periodSeconds: 12
          successThreshold: 1
          timeoutSeconds: 12
        env:
        - name: READINESS_PROBE_TIMEOUT
          value: "40"
        - name: ES_JAVA_OPTS
          value: -Xms2g -Xmx2g
        resources:
          requests:
            memory: "1Gi"
            cpu: "100m"
          limits:
            memory: "3000Mi"
  config:
    # On Elasticsearch versions before 7.9.0, replace the node.roles configuration with the following:
    # node.master: false
    # node.data: true
    # node.ingest: true
    # node.ml: true
    # node.transform: true
    node.roles: ["data", "ingest", "remote_cluster_client"]
    # node.roles: ["data", "ingest", "ml", "transform"]
    # node.remote_cluster_client: true
  volumeClaimTemplates:
  - metadata:
      name: elasticsearch-data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 256Gi
      storageClassName: elk-azurefile-sc

And here you can see I have 2 data nodes with 256 GB of storage each.

Now every day I am getting more than 30 GB of data, which I know is very high, but still my index is getting deleted every day.

And this is a secured Elasticsearch cluster; no one other than me has the authentication credentials.

Here is what I observe when I run this:

GET _cat/indices?v&s=health:desc,index&h=health,status,index,docs.count,pri,rep

health status index                                                        docs.count pri rep
red    open   filebeat-2024.03.13                                                       1   1
green  open   .internal.alerts-observability.apm.alerts-default-000001              0   1   1
green  open   .internal.alerts-observability.logs.alerts-default-000001             0   1   1
green  open   .internal.alerts-observability.metrics.alerts-default-000001          0   1   1
green  open   .internal.alerts-observability.slo.alerts-default-000001              0   1   1
green  open   .internal.alerts-observability.uptime.alerts-default-000001           0   1   1
green  open   .internal.alerts-security.alerts-default-000001                       0   1   1
green  open   .internal.alerts-stack.alerts-default-000001                          0   1   1
green  open   .kibana-observability-ai-assistant-conversations-000001               0   1   1
green  open   .kibana-observability-ai-assistant-kb-000001                          0   1   1
green  open   elastalert                                                          277   1   1
green  open   elastalert_error                                                   3021   1   1
green  open   elastalert_past                                                       0   1   1
green  open   elastalert_silence                                                  277   1   1
green  open   elastalert_status                                                   471   1   1

Regarding disk usage:

GET /_cat/allocation?v&s=disk.avail&h=node,disk.percent,disk.avail,disk.total,disk.used,disk.indices,shards&pretty

node                             disk.percent disk.avail disk.total disk.used disk.indices shards
UNASSIGNED                                                                                      4
elastic-search-cluster-es-data-0            2      250gb      256gb     5.9gb        6.4mb     33
elastic-search-cluster-es-data-1            2      250gb      256gb     5.9gb          7mb     33

Regarding cluster health:

GET _cluster/health

{
  "cluster_name": "elastic-search-cluster",
  "status": "red",
  "timed_out": false,
  "number_of_nodes": 3,
  "number_of_data_nodes": 2,
  "active_primary_shards": 33,
  "active_shards": 66,
  "relocating_shards": 0,
  "initializing_shards": 0,
  "unassigned_shards": 4,
  "delayed_unassigned_shards": 0,
  "number_of_pending_tasks": 0,
  "number_of_in_flight_fetch": 0,
  "task_max_waiting_in_queue_millis": 0,
  "active_shards_percent_as_number": 94.28571428571428
}

Here is what happens:

Here you can see the filebeat-2024.03.13 index showing nothing; the data was automatically deleted.

The filebeat-2024.03.13 index has only just started again. Because this index takes a lot of space, the Elasticsearch status went red, and for that reason I had to delete the index.

So please help me solve this issue.

There's nothing in those docs saying that.

See these docs.


Hi @DavidTurner,
After going through those docs, the issue I am getting is slightly different, so can you help me solve it? This is a high-priority issue; please give me a solution.

GET _cluster/allocation/explain
{
  "index": "filebeat-2024.03.18",
  "shard": 0,
  "primary": false
}

{
  "index": "filebeat-2024.03.18",
  "shard": 0,
  "primary": false,
  "current_state": "unassigned",
  "unassigned_info": {
    "reason": "ALLOCATION_FAILED",
    "at": "2024-03-18T05:41:33.116Z",
    "failed_allocation_attempts": 5,
    "details": """failed shard on node [HYQB8KNQQqC80Z532l-W1A]: failed recovery, failure org.elasticsearch.indices.recovery.RecoveryFailedException: [filebeat-2024.03.18][0]: Recovery failed from {elastic-search-cluster-es-data-1}{-nrKshnxS9mAmaoMnCVTSQ}{GNapxsaJQ-qXcHA4s-vxrA}{elastic-search-cluster-es-data-1}{10.244.3.69}{10.244.3.69:9300}{dir}{8.10.4}{7000099-8100499}{k8s_node_name=aks-elasticnode-20666680-vmss000001, transform.config_version=10.0.0, xpack.installed=true, ml.config_version=10.0.0} into {elastic-search-cluster-es-data-0}{HYQB8KNQQqC80Z532l-W1A}{7cJkNSdsRfSyeQZH8UZRPQ}{elastic-search-cluster-es-data-0}{10.244.5.24}{10.244.5.24:9300}{dir}{8.10.4}{7000099-8100499}{k8s_node_name=aks-elasticnode-20666680-vmss000000, transform.config_version=10.0.0, xpack.installed=true, ml.config_version=10.0.0} (failed to clean after recovery)
	at org.elasticsearch.indices.recovery.RecoveryTarget.lambda$cleanFiles$6(RecoveryTarget.java:545)
	at org.elasticsearch.action.ActionListener.completeWith(ActionListener.java:299)
	at org.elasticsearch.indices.recovery.RecoveryTarget.cleanFiles(RecoveryTarget.java:495)
	at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$3.handleRequest(PeerRecoveryTargetService.java:165)
	at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$3.handleRequest(PeerRecoveryTargetService.java:162)
	at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$RecoveryRequestHandler.messageReceived(PeerRecoveryTargetService.java:614)
	at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$RecoveryRequestHandler.messageReceived(PeerRecoveryTargetService.java:601)
	at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$1.doRun(SecurityServerTransportInterceptor.java:563)
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)
	at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$3.onResponse(SecurityServerTransportInterceptor.java:625)
	at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$3.onResponse(SecurityServerTransportInterceptor.java:614)
	at org.elasticsearch.xpack.security.authz.AuthorizationService.authorizeSystemUser(AuthorizationService.java:668)
	at org.elasticsearch.xpack.security.authz.AuthorizationService.authorize(AuthorizationService.java:310)
	at org.elasticsearch.xpack.security.transport.ServerTransportFilter.lambda$inbound$1(ServerTransportFilter.java:113)
	at org.elasticsearch.action.ActionListenerImplementations$ResponseWrappingActionListener.onResponse(ActionListenerImplementations.java:236)
	at org.elasticsearch.action.ActionListenerImplementations$MappedActionListener.onResponse(ActionListenerImplementations.java:95)
	at org.elasticsearch.xpack.security.authc.AuthenticatorChain.authenticateAsync(AuthenticatorChain.java:94)
	at org.elasticsearch.xpack.security.authc.AuthenticationService.authenticate(AuthenticationService.java:261)
	at org.elasticsearch.xpack.security.authc.AuthenticationService.authenticate(AuthenticationService.java:199)
	at org.elasticsearch.xpack.security.transport.ServerTransportFilter.authenticate(ServerTransportFilter.java:126)
	at org.elasticsearch.xpack.security.transport.ServerTransportFilter.inbound(ServerTransportFilter.java:104)
	at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler.messageReceived(SecurityServerTransportInterceptor.java:636)
	at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:74)
	at org.elasticsearch.transport.InboundHandler$1.doRun(InboundHandler.java:294)
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:983)
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
	at java.lang.Thread.run(Thread.java:1583)
Caused by: java.nio.file.NoSuchFileException: /usr/share/elasticsearch/data/indices/COenmTwcROWOOKOsdtih5Q/0/index/_42f.nvd
	at sun.nio.fs.UnixException.translateToIOException(UnixException.java:92)
	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:106)
	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
	at sun.nio.fs.UnixFileSystemProvider.implDelete(UnixFileSystemProvider.java:291)
	at sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:104)
	at java.nio.file.Files.delete(Files.java:1152)
	at org.apache.lucene.store.FSDirectory.privateDeleteFile(FSDirectory.java:346)
	at org.apache.lucene.store.FSDirectory.deleteFile(FSDirectory.java:311)
	at org.apache.lucene.store.FilterDirectory.deleteFile(FilterDirectory.java:65)
	at org.elasticsearch.index.store.ByteSizeCachingDirectory.deleteFile(ByteSizeCachingDirectory.java:174)
	at org.apache.lucene.store.FilterDirectory.deleteFile(FilterDirectory.java:65)
	at org.elasticsearch.index.store.Store$StoreDirectory.deleteFile(Store.java:759)
	at org.elasticsearch.index.store.Store$StoreDirectory.deleteFile(Store.java:764)
	at org.apache.lucene.store.LockValidatingDirectoryWrapper.deleteFile(LockValidatingDirectoryWrapper.java:37)
	at org.apache.lucene.util.FileDeleter.delete(FileDeleter.java:234)
	at org.apache.lucene.util.FileDeleter.delete(FileDeleter.java:227)
	at org.apache.lucene.util.FileDeleter.deleteFilesIfNoRef(FileDeleter.java:190)
	at org.apache.lucene.index.IndexFileDeleter.<init>(IndexFileDeleter.java:236)
	at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:1157)
	at org.elasticsearch.index.store.Store.newTemporaryAppendingIndexWriter(Store.java:1545)
	at org.elasticsearch.index.store.Store.associateIndexWithNewTranslog(Store.java:1451)
	at org.elasticsearch.indices.recovery.RecoveryTarget.lambda$cleanFiles$6(RecoveryTarget.java:512)
	... 28 more
""",
    "last_allocation_status": "no_attempt"
  },
  "can_allocate": "no",
  "allocate_explanation": "Elasticsearch isn't allowed to allocate this shard to any of the nodes in the cluster. Choose a node to which you expect this shard to be allocated, find this node in the node-by-node explanation, and address the reasons which prevent Elasticsearch from allocating this shard there.",
  "node_allocation_decisions": [
    {
      "node_id": "-nrKshnxS9mAmaoMnCVTSQ",
      "node_name": "elastic-search-cluster-es-data-1",
      "transport_address": "10.244.3.69:9300",
      "node_attributes": {
        "xpack.installed": "true",
        "transform.config_version": "10.0.0",
        "k8s_node_name": "aks-elasticnode-20666680-vmss000001",
        "ml.config_version": "10.0.0"
      },
      "node_decision": "no",
      "deciders": [
        {
          "decider": "max_retry",
          "decision": "NO",
          "explanation": """shard has exceeded the maximum number of retries [5] on failed allocation attempts - manually call [POST /_cluster/reroute?retry_failed&metric=none] to retry, [unassigned_info[[reason=ALLOCATION_FAILED], at[2024-03-18T05:41:33.116Z], failed_attempts[5], failed_nodes[[HYQB8KNQQqC80Z532l-W1A]], delayed=false, last_node[HYQB8KNQQqC80Z532l-W1A], details[failed shard on node [HYQB8KNQQqC80Z532l-W1A]: failed recovery, failure org.elasticsearch.indices.recovery.RecoveryFailedException: [filebeat-2024.03.18][0]: Recovery failed from {elastic-search-cluster-es-data-1}{-nrKshnxS9mAmaoMnCVTSQ}{GNapxsaJQ-qXcHA4s-vxrA}{elastic-search-cluster-es-data-1}{10.244.3.69}{10.244.3.69:9300}{dir}{8.10.4}{7000099-8100499}{k8s_node_name=aks-elasticnode-20666680-vmss000001, transform.config_version=10.0.0, xpack.installed=true, ml.config_version=10.0.0} into {elastic-search-cluster-es-data-0}{HYQB8KNQQqC80Z532l-W1A}{7cJkNSdsRfSyeQZH8UZRPQ}{elastic-search-cluster-es-data-0}{10.244.5.24}{10.244.5.24:9300}{dir}{8.10.4}{7000099-8100499}{k8s_node_name=aks-elasticnode-20666680-vmss000000, transform.config_version=10.0.0, xpack.installed=true, ml.config_version=10.0.0} (failed to clean after recovery)
	at org.elasticsearch.indices.recovery.RecoveryTarget.lambda$cleanFiles$6(RecoveryTarget.java:545)
	at org.elasticsearch.action.ActionListener.completeWith(ActionListener.java:299)
	at org.elasticsearch.indices.recovery.RecoveryTarget.cleanFiles(RecoveryTarget.java:495)
	at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$3.handleRequest(PeerRecoveryTargetService.java:165)
	at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$3.handleRequest(PeerRecoveryTargetService.java:162)
	at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$RecoveryRequestHandler.messageReceived(PeerRecoveryTargetService.java:614)
	at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$RecoveryRequestHandler.messageReceived(PeerRecoveryTargetService.java:601)
	at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$1.doRun(SecurityServerTransportInterceptor.java:563)
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)
	at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$3.onResponse(SecurityServerTransportInterceptor.java:625)
	at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$3.onResponse(SecurityServerTransportInterceptor.java:614)
	at org.elasticsearch.xpack.security.authz.AuthorizationService.authorizeSystemUser(AuthorizationService.java:668)
	at org.elasticsearch.xpack.security.authz.AuthorizationService.authorize(AuthorizationService.java:310)
	at org.elasticsearch.xpack.security.transport.ServerTransportFilter.lambda$inbound$1(ServerTransportFilter.java:113)
	at org.elasticsearch.action.ActionListenerImplementations$ResponseWrappingActionListener.onResponse(ActionListenerImplementations.java:236)
	at org.elasticsearch.action.ActionListenerImplementations$MappedActionListener.onResponse(ActionListenerImplementations.java:95)
	at org.elasticsearch.xpack.security.authc.AuthenticatorChain.authenticateAsync(AuthenticatorChain.java:94)
	at org.elasticsearch.xpack.security.authc.AuthenticationService.authenticate(AuthenticationService.java:261)
	at org.elasticsearch.xpack.security.authc.AuthenticationService.authenticate(AuthenticationService.java:199)
	at org.elasticsearch.xpack.security.transport.ServerTransportFilter.authenticate(ServerTransportFilter.java:126)
	at org.elasticsearch.xpack.security.transport.ServerTransportFilter.inbound(ServerTransportFilter.java:104)
	at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler.messageReceived(SecurityServerTransportInterceptor.java:636)
	at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:74)
	at org.elasticsearch.transport.InboundHandler$1.doRun(InboundHandler.java:294)
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:983)
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
	at java.lang.Thread.run(Thread.java:1583)
Caused by: java.nio.file.NoSuchFileException: /usr/share/elasticsearch/data/indices/COenmTwcROWOOKOsdtih5Q/0/index/_42f.nvd
	at sun.nio.fs.UnixException.translateToIOException(UnixException.java:92)
	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:106)
	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
	at sun.nio.fs.UnixFileSystemProvider.implDelete(UnixFileSystemProvider.java:291)
	at sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:104)
	at java.nio.file.Files.delete(Files.java:1152)
	at org.apache.lucene.store.FSDirectory.privateDeleteFile(FSDirectory.java:346)
	at org.apache.lucene.store.FSDirectory.deleteFile(FSDirectory.java:311)
	at org.apache.lucene.store.FilterDirectory.deleteFile(FilterDirectory.java:65)
	at org.elasticsearch.index.store.ByteSizeCachingDirectory.deleteFile(ByteSizeCachingDirectory.java:174)
	at org.apache.lucene.store.FilterDirectory.deleteFile(FilterDirectory.java:65)
	at org.elasticsearch.index.store.Store$StoreDirectory.deleteFile(Store.java:759)
	at org.elasticsearch.index.store.Store$StoreDirectory.deleteFile(Store.java:764)
	at org.apache.lucene.store.LockValidatingDirectoryWrapper.deleteFile(LockValidatingDirectoryWrapper.java:37)
	at org.apache.lucene.util.FileDeleter.delete(FileDeleter.java:234)
	at org.apache.lucene.util.FileDeleter.delete(FileDeleter.java:227)
	at org.apache.lucene.util.FileDeleter.deleteFilesIfNoRef(FileDeleter.java:190)
	at org.apache.lucene.index.IndexFileDeleter.<init>(IndexFileDeleter.java:236)
	at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:1157)
	at org.elasticsearch.index.store.Store.newTemporaryAppendingIndexWriter(Store.java:1545)
	at org.elasticsearch.index.store.Store.associateIndexWithNewTranslog(Store.java:1451)
	at org.elasticsearch.indices.recovery.RecoveryTarget.lambda$cleanFiles$6(RecoveryTarget.java:512)
	... 28 more
], allocation_status[no_attempt]]]"""
        },
        {
          "decider": "replica_after_primary_active",
          "decision": "NO",
          "explanation": "primary shard for this replica is not yet active"
        },
        {
          "decider": "throttling",
          "decision": "NO",
          "explanation": "primary shard for this replica is not yet active"
        }
      ]
    },
    {
      "node_id": "HYQB8KNQQqC80Z532l-W1A",
      "node_name": "elastic-search-cluster-es-data-0",
      "transport_address": "10.244.5.24:9300",
      "node_attributes": {
        "transform.config_version": "10.0.0",
        "xpack.installed": "true",
        "k8s_node_name": "aks-elasticnode-20666680-vmss000000",
        "ml.config_version": "10.0.0"
      },
      "node_decision": "no",
      "deciders": [
        {
          "decider": "max_retry",
          "decision": "NO",
          "explanation": """shard has exceeded the maximum number of retries [5] on failed allocation attempts - manually call [POST /_cluster/reroute?retry_failed&metric=none] to retry, [unassigned_info[[reason=ALLOCATION_FAILED], at[2024-03-18T05:41:33.116Z], failed_attempts[5], failed_nodes[[HYQB8KNQQqC80Z532l-W1A]], delayed=false, last_node[HYQB8KNQQqC80Z532l-W1A], details[failed shard on node [HYQB8KNQQqC80Z532l-W1A]: failed recovery, failure org.elasticsearch.indices.recovery.RecoveryFailedException: [filebeat-2024.03.18][0]: Recovery failed from {elastic-search-cluster-es-data-1}{-nrKshnxS9mAmaoMnCVTSQ}{GNapxsaJQ-qXcHA4s-vxrA}{elastic-search-cluster-es-data-1}{10.244.3.69}{10.244.3.69:9300}{dir}{8.10.4}{7000099-8100499}{k8s_node_name=aks-elasticnode-20666680-vmss000001, transform.config_version=10.0.0, xpack.installed=true, ml.config_version=10.0.0} into {elastic-search-cluster-es-data-0}{HYQB8KNQQqC80Z532l-W1A}{7cJkNSdsRfSyeQZH8UZRPQ}{elastic-search-cluster-es-data-0}{10.244.5.24}{10.244.5.24:9300}{dir}{8.10.4}{7000099-8100499}{k8s_node_name=aks-elasticnode-20666680-vmss000000, transform.config_version=10.0.0, xpack.installed=true, ml.config_version=10.0.0} (failed to clean after recovery)
	at org.elasticsearch.indices.recovery.RecoveryTarget.lambda$cleanFiles$6(RecoveryTarget.java:545)
	at org.elasticsearch.action.ActionListener.completeWith(ActionListener.java:299)
	at org.elasticsearch.indices.recovery.RecoveryTarget.cleanFiles(RecoveryTarget.java:495)
	at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$3.handleRequest(PeerRecoveryTargetService.java:165)
	at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$3.handleRequest(PeerRecoveryTargetService.java:162)
	at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$RecoveryRequestHandler.messageReceived(PeerRecoveryTargetService.java:614)
	at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$RecoveryRequestHandler.messageReceived(PeerRecoveryTargetService.java:601)
	at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$1.doRun(SecurityServerTransportInterceptor.java:563)
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)
	at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$3.onResponse(SecurityServerTransportInterceptor.java:625)
	at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$3.onResponse(SecurityServerTransportInterceptor.java:614)
	at org.elasticsearch.xpack.security.authz.AuthorizationService.authorizeSystemUser(AuthorizationService.java:668)
	at org.elasticsearch.xpack.security.authz.AuthorizationService.authorize(AuthorizationService.java:310)
	at org.elasticsearch.xpack.security.transport.ServerTransportFilter.lambda$inbound$1(ServerTransportFilter.java:113)
	at org.elasticsearch.action.ActionListenerImplementations$ResponseWrappingActionListener.onResponse(ActionListenerImplementations.java:236)
	at org.elasticsearch.action.ActionListenerImplementations$MappedActionListener.onResponse(ActionListenerImplementations.java:95)
	at org.elasticsearch.xpack.security.authc.AuthenticatorChain.authenticateAsync(AuthenticatorChain.java:94)
	at org.elasticsearch.xpack.security.authc.AuthenticationService.authenticate(AuthenticationService.java:261)
	at org.elasticsearch.xpack.security.authc.AuthenticationService.authenticate(AuthenticationService.java:199)
	at org.elasticsearch.xpack.security.transport.ServerTransportFilter.authenticate(ServerTransportFilter.java:126)
	at org.elasticsearch.xpack.security.transport.ServerTransportFilter.inbound(ServerTransportFilter.java:104)
	at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler.messageReceived(SecurityServerTransportInterceptor.java:636)
	at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:74)
	at org.elasticsearch.transport.InboundHandler$1.doRun(InboundHandler.java:294)
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:983)
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
	at java.lang.Thread.run(Thread.java:1583)
Caused by: java.nio.file.NoSuchFileException: /usr/share/elasticsearch/data/indices/COenmTwcROWOOKOsdtih5Q/0/index/_42f.nvd
	at sun.nio.fs.UnixException.translateToIOException(UnixException.java:92)
	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:106)
	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
	at sun.nio.fs.UnixFileSystemProvider.implDelete(UnixFileSystemProvider.java:291)
	at sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:104)
	at java.nio.file.Files.delete(Files.java:1152)
	at org.apache.lucene.store.FSDirectory.privateDeleteFile(FSDirectory.java:346)
	at org.apache.lucene.store.FSDirectory.deleteFile(FSDirectory.java:311)
	at org.apache.lucene.store.FilterDirectory.deleteFile(FilterDirectory.java:65)
	at org.elasticsearch.index.store.ByteSizeCachingDirectory.deleteFile(ByteSizeCachingDirectory.java:174)
	at org.apache.lucene.store.FilterDirectory.deleteFile(FilterDirectory.java:65)
	at org.elasticsearch.index.store.Store$StoreDirectory.deleteFile(Store.java:759)
	at org.elasticsearch.index.store.Store$StoreDirectory.deleteFile(Store.java:764)
	at org.apache.lucene.store.LockValidatingDirectoryWrapper.deleteFile(LockValidatingDirectoryWrapper.java:37)
	at org.apache.lucene.util.FileDeleter.delete(FileDeleter.java:234)
	at org.apache.lucene.util.FileDeleter.delete(FileDeleter.java:227)
	at org.apache.lucene.util.FileDeleter.deleteFilesIfNoRef(FileDeleter.java:190)
	at org.apache.lucene.index.IndexFileDeleter.<init>(IndexFileDeleter.java:236)
	at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:1157)
	at org.elasticsearch.index.store.Store.newTemporaryAppendingIndexWriter(Store.java:1545)
	at org.elasticsearch.index.store.Store.associateIndexWithNewTranslog(Store.java:1451)
	at org.elasticsearch.indices.recovery.RecoveryTarget.lambda$cleanFiles$6(RecoveryTarget.java:512)
	... 28 more
], allocation_status[no_attempt]]]"""
        },
        {
          "decider": "replica_after_primary_active",
          "decision": "NO",
          "explanation": "primary shard for this replica is not yet active"
        },
        {
          "decider": "throttling",
          "decision": "NO",
          "explanation": "primary shard for this replica is not yet active"
        }
      ]
    }
  ]
}
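From the max_retry decider above, I understand the failed allocation can be retried manually with the call it quotes:

POST /_cluster/reroute?retry_failed&metric=none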

But if you look, there is an explanation of an issue like this in the docs you provided me:

{
  "index" : "my-index-000001",
  "shard" : 0,
  "primary" : true,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "reason" : "NODE_LEFT",
    "at" : "2017-01-04T18:03:28.464Z",
    "details" : "node_left[OIWe8UhhThCK0V5XfmdrmQ]",
    "last_allocation_status" : "no_valid_shard_copy"
  },
  "can_allocate" : "no_valid_shard_copy",
  "allocate_explanation" : "Elasticsearch can't allocate this shard because there are no copies of its data in the cluster. Elasticsearch will allocate this shard when a node holding a good copy of its data joins the cluster. If no such node is available, restore this index from a recent snapshot."
}

In the docs example the unassigned reason is NODE_LEFT and can_allocate is no_valid_shard_copy, but the reason I am getting is:

"unassigned_info": {
    "reason": "ALLOCATION_FAILED",
    "at": "2024-03-18T05:41:33.116Z",
    "failed_allocation_attempts": 5,
    "details": """failed shard on node [HYQB8KNQQqC80Z532l-W1A]: failed recovery, failure org.elasticsearch.indices.recovery.RecoveryFailedException: [filebeat-2024.03.18][0]: Recovery failed from {elastic-search-cluster-es-data-1}{-nrKshnxS9mAmaoMnCVTSQ}{GNapxsaJQ-qXcHA4s-vxrA}{elastic-search-cluster-es-data-1}{10.244.3.69}{10.244.3.69:9300}{dir}{8.10.4}{7000099-8100499}{k8s_node_name=aks-elasticnode-20666680-vmss000001, transform.config_version=10.0.0, xpack.installed=true, ml.config_version=10.0.0} into {elastic-search-cluster-es-data-0}{HYQB8KNQQqC80Z532l-W1A}{7cJkNSdsRfSyeQZH8UZRPQ}{elastic-search-cluster-es-data-0}{10.244.5.24}{10.244.5.24:9300}{dir}{8.10.4}{7000099-8100499}{k8s_node_name=aks-elasticnode-20666680-vmss000000, transform.config_version=10.0.0, xpack.installed=true, ml.config_version=10.0.0} (failed to clean after recovery)
	at org.elasticsearch.indices.recovery.RecoveryTarget.lambda$cleanFiles$6(RecoveryTarget.java:545)
	at org.elasticsearch.action.ActionListener.completeWith(ActionListener.java:299)
	at org.elasticsearch.indices.recovery.RecoveryTarget.cleanFiles(RecoveryTarget.java:495)
	at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$3.handleRequest(PeerRecoveryTargetService.java:165)
	at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$3.handleRequest(PeerRecoveryTargetService.java:162)
	at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$RecoveryRequestHandler.messageReceived(PeerRecoveryTargetService.java:614)
	at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$RecoveryRequestHandler.messageReceived(PeerRecoveryTargetService.java:601)
	at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$1.doRun(SecurityServerTransportInterceptor.java:563)
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)
	at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$3.onResponse(SecurityServerTransportInterceptor.java:625)
	at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$3.onResponse(SecurityServerTransportInterceptor.java:614)
	at org.elasticsearch.xpack.security.authz.AuthorizationService.authorizeSystemUser(AuthorizationService.java:668)
	at org.elasticsearch.xpack.security.authz.AuthorizationService.authorize(AuthorizationService.java:310)
	at org.elasticsearch.xpack.security.transport.ServerTransportFilter.lambda$inbound$1(ServerTransportFilter.java:113)
	at org.elasticsearch.action.ActionListenerImplementations$ResponseWrappingActionListener.onResponse(ActionListenerImplementations.java:236)
	at org.elasticsearch.action.ActionListenerImplementations$MappedActionListener.onResponse(ActionListenerImplementations.java:95)
	at org.elasticsearch.xpack.security.authc.AuthenticatorChain.authenticateAsync(AuthenticatorChain.java:94)
	at org.elasticsearch.xpack.security.authc.AuthenticationService.authenticate(AuthenticationService.java:261)
	at org.elasticsearch.xpack.security.authc.AuthenticationService.authenticate(AuthenticationService.java:199)
	at org.elasticsearch.xpack.security.transport.ServerTransportFilter.authenticate(ServerTransportFilter.java:126)
	at org.elasticsearch.xpack.security.transport.ServerTransportFilter.inbound(ServerTransportFilter.java:104)
	at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler.messageReceived(SecurityServerTransportInterceptor.java:636)
	at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:74)
	at org.elasticsearch.transport.InboundHandler$1.doRun(InboundHandler.java:294)
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:983)
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
	at java.lang.Thread.run(Thread.java:1583)
Caused by: java.nio.file.NoSuchFileException: /usr/share/elasticsearch/data/indices/COenmTwcROWOOKOsdtih5Q/0/index/_42f.nvd
	at sun.nio.fs.UnixException.translateToIOException(UnixException.java:92)
	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:106)
	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
	at sun.nio.fs.UnixFileSystemProvider.implDelete(UnixFileSystemProvider.java:291)
	at sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:104)
	at java.nio.file.Files.delete(Files.java:1152)
	at org.apache.lucene.store.FSDirectory.privateDeleteFile(FSDirectory.java:346)
	at org.apache.lucene.store.FSDirectory.deleteFile(FSDirectory.java:311)
	at org.apache.lucene.store.FilterDirectory.deleteFile(FilterDirectory.java:65)
	at org.elasticsearch.index.store.ByteSizeCachingDirectory.deleteFile(ByteSizeCachingDirectory.java:174)
	at org.apache.lucene.store.FilterDirectory.deleteFile(FilterDirectory.java:65)
	at org.elasticsearch.index.store.Store$StoreDirectory.deleteFile(Store.java:759)
	at org.elasticsearch.index.store.Store$StoreDirectory.deleteFile(Store.java:764)
	at org.apache.lucene.store.LockValidatingDirectoryWrapper.deleteFile(LockValidatingDirectoryWrapper.java:37)
	at org.apache.lucene.util.FileDeleter.delete(FileDeleter.java:234)
	at org.apache.lucene.util.FileDeleter.delete(FileDeleter.java:227)
	at org.apache.lucene.util.FileDeleter.deleteFilesIfNoRef(FileDeleter.java:190)
	at org.apache.lucene.index.IndexFileDeleter.<init>(IndexFileDeleter.java:236)
	at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:1157)
	at org.elasticsearch.index.store.Store.newTemporaryAppendingIndexWriter(Store.java:1545)
	at org.elasticsearch.index.store.Store.associateIndexWithNewTranslog(Store.java:1451)
	at org.elasticsearch.indices.recovery.RecoveryTarget.lambda$cleanFiles$6(RecoveryTarget.java:512)
	... 28 more
""",
    "last_allocation_status": "no_attempt"
  },
  "can_allocate": "no",
  "allocate_explanation": "Elasticsearch isn't allowed to allocate this shard to any of the nodes in the cluster. Choose a node to which you expect this shard to be allocated, find this node in the node-by-node explanation, and address the reasons which prevent Elasticsearch from allocating this shard there.",

Please give me a solution to resolve this.

Please note that this is a community forum where everyone volunteers their time. It is not a support forum with SLAs or even guarantees of a resolution.

As David has pointed out it seems like the storage you are using is not suitable for Elasticsearch as it does not behave like a local disk. This has led to some shard(s) being corrupted. Based on this I would recommend the following:

  1. Change the type of storage you are using for the cluster. Based on naming it seems you might be using Azure Files, which I believe is not suitable for Elasticsearch. If this is the case I would recommend switching to Azure Managed Disk.

  2. As the index has been corrupted I would recommend deleting it and restoring it from a snapshot. If you do not have any snapshot you have likely lost the data and will need to force allocation of the corrupted shard as an empty shard as per the instructions towards the end of these docs (see the sketch after this list). This means you permanently lose any data that was in this shard. If the index only has one primary shard you could just as well delete the index.
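For reference, that last-resort command looks roughly like this; the index, shard, and node name here are taken from the output earlier in this thread, and accept_data_loss makes the permanent data loss explicit:

POST _cluster/reroute?metric=none
{
  "commands": [
    {
      "allocate_empty_primary": {
        "index": "filebeat-2024.03.18",
        "shard": 0,
        "node": "elastic-search-cluster-es-data-0",
        "accept_data_loss": true
      }
    }
  ]
}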

Yeah, this line (the NoSuchFileException) indicates that the problem is outside of ES:

We're seeing this file on disk but then when we come to delete it, it's already gone. That means either the file wasn't there to start with (so the listing was wrong) or else something other than ES deleted the file out from under us (which is not permitted).

Hi @DavidTurner,

Can I try @Christian_Dahlqvist's solutions? Will they solve my issue?

These are the solutions:

Change the type of storage you are using for the cluster. Based on naming it seems you might be using Azure Files, which I believe is not suitable for Elasticsearch. If this is the case I would recommend switching to Azure Managed Disk.

As the index has been corrupted I would recommend deleting it and restoring it from a snapshot. If you do not have any snapshot you have likely lost the data and will need to force allocation of the corrupted shard as an empty shard as per the instructions towards the end of these docs.
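If I understand the first suggestion correctly, that would mean pointing the volumeClaimTemplate at a managed-disk storage class instead of elk-azurefile-sc, roughly like this (a sketch, assuming the built-in managed-csi storage class on AKS):

volumeClaimTemplates:
- metadata:
    name: elasticsearch-data
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 256Gi
    storageClassName: managed-csi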

You can certainly try that; I can't offer any guarantees about whether it'll solve the issue or not.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.