ES cluster (single node) in RED due to unassigned shards

Here are some details

1) List of unassigned shards

```
.ds-.logs-deprecation.elasticsearch-default-2022.06.07-000003 0 p UNASSIGNED CLUSTER_RECOVERED
fs_audit_log_files_560fc8f4-1f94-4a6f-b517-b4370d00e550-2022.05-000001 1 p UNASSIGNED CLUSTER_RECOVERED
.ds-.logs-deprecation.elasticsearch-default-2022.06.23-000004 0 p UNASSIGNED CLUSTER_RECOVERED
fs_audit_log_files_560fc8f4-1f94-4a6f-b517-b4370d00e550-2022.06-000001 1 p UNASSIGNED CLUSTER_RECOVERED
fs_audit_log_files_560fc8f4-1f94-4a6f-b517-b4370d00e550-2022.06-000001 0 p UNASSIGNED CLUSTER_RECOVERED
fs_audit_log_folders_560fc8f4-1f94-4a6f-b517-b4370d00e550-2022.05-000001 1 p UNASSIGNED CLUSTER_RECOVERED
fs_audit_log_folders_560fc8f4-1f94-4a6f-b517-b4370d00e550-2022.05-000001 0 p UNASSIGNED CLUSTER_RECOVERED
.ds-ilm-history-5-2022.05.10-000001 0 p UNASSIGNED CLUSTER_RECOVERED
.ds-.logs-deprecation.elasticsearch-default-2022.05.24-000002 0 p UNASSIGNED CLUSTER_RECOVERED
.ds-.logs-deprecation.elasticsearch-default-2022.05.10-000001 0 p UNASSIGNED CLUSTER_RECOVERED
fs_files_560fc8f4-1f94-4a6f-b517-b4370d00e550 2 p UNASSIGNED CLUSTER_RECOVERED
fs_files_560fc8f4-1f94-4a6f-b517-b4370d00e550 1 p UNASSIGNED CLUSTER_RECOVERED
fs_files_560fc8f4-1f94-4a6f-b517-b4370d00e550 4 p UNASSIGNED CLUSTER_RECOVERED
fs_files_560fc8f4-1f94-4a6f-b517-b4370d00e550 3 p UNASSIGNED CLUSTER_RECOVERED
fs_files_560fc8f4-1f94-4a6f-b517-b4370d00e550 0 p UNASSIGNED CLUSTER_RECOVERED
fs_audit_log_folders_560fc8f4-1f94-4a6f-b517-b4370d00e550-2022.06-000001 1 p UNASSIGNED CLUSTER_RECOVERED
fs_audit_log_folders_560fc8f4-1f94-4a6f-b517-b4370d00e550-2022.06-000001 0 p UNASSIGNED CLUSTER_RECOVERED
fs_users_560fc8f4-1f94-4a6f-b517-b4370d00e550 0 p UNASSIGNED ALLOCATION_FAILED
ilm-history-2-000005 0 p UNASSIGNED CLUSTER_RECOVERED
.ds-ilm-history-5-2022.06.09-000002 0 p UNASSIGNED CLUSTER_RECOVERED
```
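
For reference, a listing like the one above can be produced with the _cat/shards API; the host and port below are placeholders for this cluster.

```
# List every shard with its allocation state and keep only the unassigned ones
curl -s "http://localhost:9200/_cat/shards?h=index,shard,prirep,state,unassigned.reason&v" \
  | grep UNASSIGNED
```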

2) Reason for Allocation Failure for each index and shard

shard : 0 & 1 index : fs_audit_log_files_560fc8f4-1f94-4a6f-b517-b4370d00e550-2022.06-000001 Reason : cannot allocate because a previous copy of the primary shard existed but can no longer be found on the nodes in the cluster

shard : 1 index : fs_users_560fc8f4-1f94-4a6f-b517-b4370d00e550 Reason : no such shard exception

shard : 0 index : fs_files_560fc8f4-1f94-4a6f-b517-b4370d00e550 Reason : index UUID in shard state was: go2nbXWPQcOrIm4qSCoLWg expected: piKm9cDxQHCJaAAhyl5rLg on shard path: /var/lib/elasticsearch/nodes/0/indices/piKm9cDxQHCJaAAhyl5rLg/0

shard : 1 index : fs_files_560fc8f4-1f94-4a6f-b517-b4370d00e550 Reason : cannot allocate because information about existing shard data is still being retrieved from some of the nodes

shard : 2 index : fs_files_560fc8f4-1f94-4a6f-b517-b4370d00e550 Reason: cannot allocate because a previous copy of the primary shard existed but can no longer be found on the nodes in the cluster

shard : 3 index : fs_files_560fc8f4-1f94-4a6f-b517-b4370d00e550 Reason : [fs_files_560fc8f4-1f94-4a6f-b517-b4370d00e550][3] index UUID in shard state was: LHd8zcQzSM-1hii_Ml3Jng expected: piKm9cDxQHCJaAAhyl5rLg on shard path: /var/lib/elasticsearch/nodes/0/indices/piKm9cDxQHCJaAAhyl5rLg/3

shard : 4 index : fs_files_560fc8f4-1f94-4a6f-b517-b4370d00e550 Reason : cannot allocate because all found copies of the shard are either stale or corrupt

shard : 0 index : fs_audit_log_folders_560fc8f4-1f94-4a6f-b517-b4370d00e550-2022.06-000001 Reason : [fs_audit_log_folders_560fc8f4-1f94-4a6f-b517-b4370d00e550-2022.06-000001][0] index UUID in shard state was: LuAEl5GbQKmlZtzaiRzoXA expected: 3lE1Tk4JTYCfVoIp4OnEqg on shard path: /var/lib/elasticsearch/nodes/0/indices/3lE1Tk4JTYCfVoIp4OnEqg/0

shard : 1 index : fs_audit_log_folders_560fc8f4-1f94-4a6f-b517-b4370d00e550-2022.06-000001 Reason : cannot allocate because a previous copy of the primary shard existed but can no longer be found on the nodes in the cluster

shard : 0 index : fs_audit_log_folders_560fc8f4-1f94-4a6f-b517-b4370d00e550-2022.05-000001 Reason : cannot allocate because a previous copy of the primary shard existed but can no longer be found on the nodes in the cluster

shard : 1 index : fs_audit_log_folders_560fc8f4-1f94-4a6f-b517-b4370d00e550-2022.05-000001 Reason : cannot allocate because all found copies of the shard are either stale or corrupt

shard : 1 index : fs_audit_log_files_560fc8f4-1f94-4a6f-b517-b4370d00e550-2022.05-000001 Reason : cannot allocate because all found copies of the shard are either stale or corrupt

shard : 0 index : .ds-ilm-history-5-2022.05.10-000001 Reason : [.ds-ilm-history-5-2022.05.10-000001][0] index UUID in shard state was: 3mUO113ES22qeEDguuH-VA expected: S4NQUjW2QP6JaGaSOx8KSA on shard path: /var/lib/elasticsearch/nodes/0/indices/S4NQUjW2QP6JaGaSOx8KSA/0

shard : 0 index : .ds-.logs-deprecation.elasticsearch-default-2022.05.10-000001 Reason : [.ds-.logs-deprecation.elasticsearch-default-2022.05.10-000001][0] index UUID in shard state was: zG1S6f3XRlydpl7oDu1r_Q expected: MAoeLsVrSZ-R0LJjOKbcxg on shard path: /var/lib/elasticsearch/nodes/0/indices/MAoeLsVrSZ-R0LJjOKbcxg/0

shard : 1 index : .ds-.logs-deprecation.elasticsearch-default-2022.05.10-000001 Reason : shard_not_found_exception

shard : 0 index : ilm-history-2-000005 Reason : cannot allocate because a previous copy of the primary shard existed but can no longer be found on the nodes in the cluster

shard : 0 index : .ds-ilm-history-5-2022.06.09-000002 Reason : [.ds-ilm-history-5-2022.06.09-000002][0] index UUID in shard state was: nfOaT915QYuZ1y9YT9_b_g expected: LHd8zcQzSM-1hii_Ml3Jng on shard path: /var/lib/elasticsearch/nodes/0/indices/LHd8zcQzSM-1hii_Ml3Jng/0

shard : 0 index : .ds-ilm-history-5-2022.06.09-000002 Reason : cannot allocate because a previous copy of the primary shard existed but can no longer be found on the nodes in the cluster

shard : 0 index : .ds-.logs-deprecation.elasticsearch-default-2022.06.23-000004 Reason : [.ds-.logs-deprecation.elasticsearch-default-2022.06.23-000004][0] index UUID in shard state was: wfz5xZLtQlOrss11GQO6kQ expected: go2nbXWPQcOrIm4qSCoLWg on shard path: /var/lib/elasticsearch/nodes/0/indices/go2nbXWPQcOrIm4qSCoLWg/0
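
For context, per-shard reasons like the ones above come from the cluster allocation explain API; the index name and shard number in this request are only examples.

```
# Ask the cluster why one specific primary shard is unassigned
curl -s -H 'Content-Type: application/json' \
  -X POST "http://localhost:9200/_cluster/allocation/explain?pretty" \
  -d '{
        "index": "fs_files_560fc8f4-1f94-4a6f-b517-b4370d00e550",
        "shard": 0,
        "primary": true
      }'
```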

3) Snippet from the log files (elasticsearch.out)
```
Caused by: org.elasticsearch.transport.RemoteTransportException: [75296278eeb2][172.28.5.1:9300][internal:gateway/local/started_shards[n]]
Caused by: org.elasticsearch.ElasticsearchException: failed to load started shards
at org.elasticsearch.gateway.TransportNodesListGatewayStartedShards.nodeOperation(TransportNodesListGatewayStartedShards.java:185) ~[elasticsearch-7.16.3.jar:7.16.3]
at org.elasticsearch.gateway.TransportNodesListGatewayStartedShards.nodeOperation(TransportNodesListGatewayStartedShards.java:52) ~[elasticsearch-7.16.3.jar:7.16.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction.nodeOperation(TransportNodesAction.java:191) ~[elasticsearch-7.16.3.jar:7.16.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:305) ~[elasticsearch-7.16.3.jar:7.16.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:299) ~[elasticsearch-7.16.3.jar:7.16.3]
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:67) ~[elasticsearch-7.16.3.jar:7.16.3]
at org.elasticsearch.transport.TransportService$6.doRun(TransportService.java:1045) ~[elasticsearch-7.16.3.jar:7.16.3]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:777) ~[elasticsearch-7.16.3.jar:7.16.3]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26) ~[elasticsearch-7.16.3.jar:7.16.3]
... 3 more
Caused by: org.elasticsearch.ElasticsearchException: java.io.IOException: failed to read /var/lib/elasticsearch/nodes/0/indices/3mUO113ES22qeEDguuH-VA/0/_state/state-23.st
at org.elasticsearch.ExceptionsHelper.maybeThrowRuntimeAndSuppress(ExceptionsHelper.java:159) ~[elasticsearch-7.16.3.jar:7.16.3]
at org.elasticsearch.gateway.MetadataStateFormat.loadGeneration(MetadataStateFormat.java:414) ~[elasticsearch-7.16.3.jar:7.16.3]
at org.elasticsearch.gateway.MetadataStateFormat.loadLatestStateWithGeneration(MetadataStateFormat.java:435) ~[elasticsearch-7.16.3.jar:7.16.3]
at org.elasticsearch.gateway.MetadataStateFormat.loadLatestState(MetadataStateFormat.java:460) ~[elasticsearch-7.16.3.jar:7.16.3]
at org.elasticsearch.gateway.TransportNodesListGatewayStartedShards.nodeOperation(TransportNodesListGatewayStartedShards.java:129) ~[elasticsearch-7.16.3.jar:7.16.3]
at org.elasticsearch.gateway.TransportNodesListGatewayStartedShards.nodeOperation(TransportNodesListGatewayStartedShards.java:52) ~[elasticsearch-7.16.3.jar:7.16.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction.nodeOperation(TransportNodesAction.java:191) ~[elasticsearch-7.16.3.jar:7.16.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:305) ~[elasticsearch-7.16.3.jar:7.16.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:299) ~[elasticsearch-7.16.3.jar:7.16.3]
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:67) ~[elasticsearch-7.16.3.jar:7.16.3]
at org.elasticsearch.transport.TransportService$6.doRun(TransportService.java:1045) ~[elasticsearch-7.16.3.jar:7.16.3]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:777) ~[elasticsearch-7.16.3.jar:7.16.3]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26) ~[elasticsearch-7.16.3.jar:7.16.3]
... 3 more
Caused by: java.io.IOException: failed to read /var/lib/elasticsearch/nodes/0/indices/3mUO113ES22qeEDguuH-VA/0/_state/state-23.st
at org.elasticsearch.gateway.MetadataStateFormat.loadGeneration(MetadataStateFormat.java:409) ~[elasticsearch-7.16.3.jar:7.16.3]
at org.elasticsearch.gateway.MetadataStateFormat.loadLatestStateWithGeneration(MetadataStateFormat.java:435) ~[elasticsearch-7.16.3.jar:7.16.3]
at org.elasticsearch.gateway.MetadataStateFormat.loadLatestState(MetadataStateFormat.java:460) ~[elasticsearch-7.16.3.jar:7.16.3]
at org.elasticsearch.gateway.TransportNodesListGatewayStartedShards.nodeOperation(TransportNodesListGatewayStartedShards.java:129) ~[elasticsearch-7.16.3.jar:7.16.3]
at org.elasticsearch.gateway.TransportNodesListGatewayStartedShards.nodeOperation(TransportNodesListGatewayStartedShards.java:52) ~[elasticsearch-7.16.3.jar:7.16.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction.nodeOperation(TransportNodesAction.java:191) ~[elasticsearch-7.16.3.jar:7.16.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:305) ~[elasticsearch-7.16.3.jar:7.16.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:299) ~[elasticsearch-7.16.3.jar:7.16.3]
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:67) ~[elasticsearch-7.16.3.jar:7.16.3]
at org.elasticsearch.transport.TransportService$6.doRun(TransportService.java:1045) ~[elasticsearch-7.16.3.jar:7.16.3]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:777) ~[elasticsearch-7.16.3.jar:7.16.3]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26) ~[elasticsearch-7.16.3.jar:7.16.3]
... 3 more
```
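
The state file in that trace sits under a directory named after an index UUID (3mUO113ES22qeEDguuH-VA). To check which index, if any, currently owns a given UUID, the _cat/indices API can list the name-to-UUID mapping (host/port are placeholders):

```
# Compare the on-disk directory names with the UUIDs the cluster expects
curl -s "http://localhost:9200/_cat/indices?h=index,uuid&v"
```
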
What options are available to me to get Elasticsearch back to a working state? Please note that this is a one-node cluster, so re-routing the shard/index to another node will not work. What other options might be available to me?

It's incredibly hard to follow this post due to your formatting. I would strongly suggest formatting your code/logs/config using the </> button, or markdown-style backticks. It helps to make things easy to read, which helps us help you.

Please find the reformatted content above.

It looks like you have some pretty serious issues there. Did your hosts suffer from some sort of disk failure?

Yes, there were iSCSI connection drops to the disk target. Could that have led to this issue? If yes, under what scenarios can an iSCSI connection drop to the target disk device lead to such issues?

Definitely. Any disk issues will cause problems.

Given the state of your cluster and the fact that you only have a single node I suspect you will need to restore data from a recent snapshot in order to resolve these issues.
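
If a snapshot repository was registered before the failure, a restore would look roughly like the sketch below; the repository name, snapshot name, and index pattern are placeholders, not values from this cluster.

```
# List the snapshots available in a registered repository (name is a placeholder)
curl -s "http://localhost:9200/_cat/snapshots/my_repository?v"

# Delete (or close) the broken indices first, then restore them from the snapshot
curl -s -H 'Content-Type: application/json' \
  -X POST "http://localhost:9200/_snapshot/my_repository/my_snapshot/_restore?pretty" \
  -d '{
        "indices": "fs_*",
        "include_global_state": false
      }'
```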

Thanks for your responses
