Unable to access Elasticsearch console

Hi,

We have already freed up space on the Elasticsearch server, but we are still facing the issue.
We also checked the cluster allocation and found the output below.

```
{
"note" : "No shard was specified in the explain API request, so this response explains a randomly chosen unassigned shard. There may be other unassigned shards in this cluster which cannot be assigned for different reasons. It may not be possible to assign this shard until one of the other shards is assigned correctly. To explain the allocation of other shards (whether assigned or unassigned) you must specify the target shard in the request to this API.",
"index" : "logstash-2022.10.28-000283",
"shard" : 0,
"primary" : true,
"current_state" : "unassigned",
"unassigned_info" : {
"reason" : "ALLOCATION_FAILED",
"at" : "2022-11-14T09:51:53.196Z",
"failed_allocation_attempts" : 5,
"details" : "failed shard on node [OGDf2uTBSrqMUgxwdNucSg]: failed recovery, failure RecoveryFailedException[[logstash-2022.10.28-000283][0]: Recovery failed on {Prod-ELKPROD}{OGDf2uTBSrqMUgxwdNucSg}{fjOvLonZR16OrB6yF58W6Q}{172.19.1.125}{172.19.1.125:9300}{cdfhilmrstw}{ml.machine_memory=8139542528, xpack.installed=true, transform.node=true, ml.max_open_jobs=512, ml.max_jvm_size=4072669184}]; nested: IndexShardRecoveryException[failed recovery]; nested: TranslogCorruptedException[translog from source [/elk/elasticsearch/nodes/0/indices/FmuUNJTSTHuSiZ3QENg-rw/0/translog] is corrupted]; nested: NoSuchFileException[/elk/elasticsearch/nodes/0/indices/FmuUNJTSTHuSiZ3QENg-rw/0/translog/translog-6965.tlog]; ",
"last_allocation_status" : "no"
},
"can_allocate" : "yes",
"allocate_explanation" : "can allocate the shard",
"target_node" : {
"id" : "OGDf2uTBSrqMUgxwdNucSg",
"name" : "Prod-ELKPROD",
"transport_address" : "xxx.xxx.xx.xx:9300",
"attributes" : {
"ml.machine_memory" : "8139542528",
"xpack.installed" : "true",
"transform.node" : "true",
"ml.max_open_jobs" : "512",
"ml.max_jvm_size" : "4072669184"
}
},
"allocation_id" : "KsHiCPXCTViZuslB_y_t3A",
"node_allocation_decisions" : [
{
"node_id" : "OGDf2uTBSrqMUgxwdNucSg",
"node_name" : "Prod-ELKPROD",
"transport_address" : "xxx.xxx.xx.xx:9300",
"node_attributes" : {
"ml.machine_memory" : "8139542528",
"xpack.installed" : "true",
"transform.node" : "true",
"ml.max_open_jobs" : "512",
"ml.max_jvm_size" : "4072669184"
},
"node_decision" : "yes",
"store" : {
"in_sync" : true,
"allocation_id" : "KsHiCPXCTViZuslB_y_t3A"
}
}
]
}
```
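For reference, the output above is from the allocation explain API called without a body; to explain this exact shard (as the "note" suggests), the request would look roughly like this in Kibana Dev Tools syntax:

```
# Explain allocation for the specific unassigned primary shard
GET _cluster/allocation/explain
{
  "index": "logstash-2022.10.28-000283",
  "shard": 0,
  "primary": true
}
```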

The Elasticsearch logs are below.

```
[2022-11-09T16:22:44,539][WARN ][o.e.x.m.e.l.LocalExporter] [Prod-ELKPROD] unexpected error while indexing monitoring document
org.elasticsearch.xpack.monitoring.exporter.ExportException: ClusterBlockException[index [.monitoring-es-7-2022.11.09] blocked by: [TOO_MANY_REQUESTS/12/disk usage exceeded flood-stage watermark, index has read-only-allow-delete block];]
at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.lambda$throwExportException$2(LocalBulk.java:135) ~[x-pack-monitoring-7.17.2.jar:7.17.2]
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197) ~[?:?]
at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:179) ~[?:?]
at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:992) ~[?:?]
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509) ~[?:?]
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499) ~[?:?]
at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150) ~[?:?]
at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173) ~[?:?]
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:?]
at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:596) ~[?:?]
at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.throwExportException(LocalBulk.java:136) [x-pack-monitoring-7.17.2.jar:7.17.2]
at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.lambda$doFlush$0(LocalBulk.java:117) [x-pack-monitoring-7.17.2.jar:7.17.2]
at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:136) [elasticsearch-7.17.2.jar:7.17.2]
at org.elasticsearch.action.support.ContextPreservingActionListener.onResponse(ContextPreservingActionListener.java:31) [elasticsearch-7.17.2.jar:7.17.2]
at org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:88) [elasticsearch-7.17.2.jar:7.17.2]
at org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:82) [elasticsearch-7.17.2.jar:7.17.2]
at org.elasticsearch.action.support.ContextPreservingActionListener.onResponse(ContextPreservingActionListener.java:31) [elasticsearch-7.17.2.jar:7.17.2]
at org.elasticsearch.xpack.security.action.filter.SecurityActionFilter.lambda$applyInternal$2(SecurityActionFilter.java:192) [x-pack-security-7.17.2.jar:7.17.2]
at org.elasticsearch.action.ActionListener$DelegatingFailureActionListener.onResponse(ActionListener.java:219) [elasticsearch-7.17.2.jar:7.17.2]
at org.elasticsearch.action.ActionListener$RunBeforeActionListener.onResponse(ActionListener.java:389) [elasticsearch-7.17.2.jar:7.17.2]
at org.elasticsearch.action.ActionListener$MappedActionListener.onResponse(ActionListener.java:101) [elasticsearch-7.17.2.jar:7.17.2]
at org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation$1.finishHim(TransportBulkAction.java:625) [elasticsearch-7.17.2.jar:7.17.2]
at org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation$1.onFailure(TransportBulkAction.java:620) [elasticsearch-7.17.2.jar:7.17.2]
at org.elasticsearch.action.support.TransportAction$1.onFailure(TransportAction.java:97) [elasticsearch-7.17.2.jar:7.17.2]
at org.elasticsearch.action.support.ContextPreservingActionListener.onFailure(ContextPreservingActionListener.java:38) [elasticsearch-7.17.2.jar:7.17.2]
at org.elasticsearch.action.ActionListener$Delegating.onFailure(ActionListener.java:66) [elasticsearch-7.17.2.jar:7.17.2]
at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.finishAsFailed(TransportReplicationAction.java:1041) [elasticsearch-7.17.2.jar:7.17.2]
at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.doRun(TransportReplicationAction.java:818) [elasticsearch-7.17.2.jar:7.17.2]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26) [elasticsearch-7.17.2.jar:7.17.2]
at org.elasticsearch.action.support.replication.TransportReplicationAction.runRerouteP
```

Please let me know how I can resolve this issue, because at the moment my ELK stack is down and I am not able to access the Elasticsearch web console.

Thanks.

This error indicates a data node is critically low on disk space and has reached the flood-stage disk usage watermark. To prevent a full disk, when a node reaches this watermark, Elasticsearch blocks writes to any index with a shard on the node. If the block affects related system indices, Kibana and other Elastic Stack features may become unavailable.

Elasticsearch will automatically remove the write block when the affected node’s disk usage goes below the high disk watermark. To achieve this, Elasticsearch automatically moves some of the affected node’s shards to other nodes in the same data tier.
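To see how close each node is to the watermarks, you can check per-node disk usage with something like this (Kibana Dev Tools syntax):

```
# Disk usage and shard allocation per data node
GET _cat/allocation?v=true

# Or a narrower view of disk usage per node
GET _cat/nodes?v=true&h=name,node.role,disk.total,disk.used,disk.avail,disk.used_percent
```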

To immediately restore write operations, you can temporarily increase the disk watermarks and remove the write block.
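As a rough sketch of what that looks like (adjust the percentages to your environment, and remember to reset them once disk space has been freed):

```
# Temporarily raise the disk watermarks (reset these to null afterwards)
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.disk.watermark.low": "90%",
    "cluster.routing.allocation.disk.watermark.high": "95%",
    "cluster.routing.allocation.disk.watermark.flood_stage": "97%"
  }
}

# Remove the read-only-allow-delete block from the affected indices
PUT */_settings
{
  "index.blocks.read_only_allow_delete": null
}
```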

Follow the instructions here: Fix watermark errors | Elasticsearch Guide [master] | Elastic

Hope this resolves the issue.

Thanks
Rashmi

Hi Rashmi,

Thanks for the reply. We are now able to access the Elasticsearch console, and all the indices are working fine except one, which is showing index status RED. The error details are below.

```
{
  "took": 28,
  "timed_out": false,
  "_shards": {
    "total": 242,
    "successful": 241,
    "skipped": 241,
    "failed": 1,
    "failures": [
      {
        "shard": 0,
        "index": "logstash-2022.10.28-000283",
        "node": null,
        "reason": {
          "type": "no_shard_available_action_exception",
          "reason": null,
          "index_uuid": "FmuUNJTSTHuSiZ3QENg-rw",
          "shard": "0",
          "index": "logstash-2022.10.28-000283"
        }
      }
    ]
  },
  "hits": {
    "total": 0,
    "max_score": 0,
    "hits": []
  }
}
```
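For context, output like the above and the settings below can be reproduced with requests along these lines (the exact requests are not shown in the thread):

```
# Confirm the health of the affected index
GET _cat/indices/logstash-2022.10.28-000283?v=true&h=index,health,status,pri,rep

# Pull its settings in flat form
GET logstash-2022.10.28-000283/_settings?flat_settings=true
```

The index settings: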
```
{
  "index.blocks.read_only_allow_delete": "false",
  "index.priority": "1",
  "index.query.default_field": [
    "*"
  ],
  "index.write.wait_for_active_shards": "1",
  "index.routing.allocation.include._tier_preference": "data_content",
  "index.routing.allocation.require._id": "OGDf2uTBSrqMUgxwdNucSg",
  "index.refresh_interval": "5s",
  "index.blocks.write": "true",
  "index.number_of_replicas": "1"
}
```
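Since the allocation explain output earlier shows failed_allocation_attempts: 5 but can_allocate: yes, would it be safe to retry the failed allocation, for example like this? The translog for this shard appears corrupted, so I am not sure a retry would succeed.

```
# Retry allocations that previously hit the maximum number of attempts
POST _cluster/reroute?retry_failed=true

# Re-check shard-level health for the affected index
GET _cluster/health/logstash-2022.10.28-000283?level=shards
```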

Please let me know how I can resolve this issue.

Regards,
Ravi
