"unassigned_shards": 1

Hi, we are running Elasticsearch 6.8.1 on a 3-node cluster and got into this situation because one of the hosts ran out of storage.

I used the command below to clear the read-only block on the indices.

curl -X PUT 'http://xxxxxxxxxxxxx:9200/_all/_settings' -H 'Content-Type: application/json' -d '{"index.blocks.read_only": false}'

Any help would be much appreciated.

<{"cluster_name":"elasticclusterprod","status":"yellow","timed_out":false,"number_of_nodes":3,"number_of_data_nodes":3,"active_primary_shards":246,"active_shards":491,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":99.79674796747967}/>

I think you may need to clear both of these settings:

PUT /_all/_settings
{
  "index.blocks.read_only_allow_delete": null,
  "index.blocks.read_only": false
}
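
If you are applying it with curl rather than the Dev Tools console, the equivalent request would look roughly like this (a sketch, using the same host placeholder as above):

curl -X PUT 'http://xxxxxxxxxxxxx:9200/_all/_settings' -H 'Content-Type: application/json' -d '{"index.blocks.read_only_allow_delete": null, "index.blocks.read_only": false}'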

Also, I assume you have enough free disk space now?

Thanks for the suggestion. Unfortunately, it did not help.

xxxxxxxxxxxxx:9200/_cluster/allocation/explain gives me the result below, and translog-1.tlog is missing on that node. Can I copy the file from one of the other nodes and restart?

/elasticsearch/elasticsearch-6.8.1/data/nodes/0/indices/qWFVysb7Sg65AyHPTJKXsA/0/translog]$ ls
translog.ckp

{"index":"mqm_djg8z5rlx1xnehe93109jk5qm_tr_test_index","shard":0,"primary":false,"current_state":"unassigned","unassigned_info":{"reason":"ALLOCATION_FAILED","at":"2021-02-21T02:29:44.489Z","failed_allocation_attempts":5,"details":"failed shard on node [nyhbyj_LT9S9uuPxboBfag]: failed recovery, failure RecoveryFailedException[[mqm_djg8z5rlx1xnehe93109jk5qm_tr_test_index][0]: Recovery failed from {node-207}{P53GOlmbSmmgfM6VUwxKDQ}{yVyYSaOdRQSYeJFTbEWA5w}{xx.xxx.xx.xxx}{xx.xxx.xx.xxx:zzz}{ml.machine_memory=12405493760, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true} into {node-728}{nyhbyj_LT9S9uuPxboBfag}{3XaRctYSQfuBFC1Klif3HQ}{yy.yyyy.yy.yyy}{yy.yyyy.yy.yyy:zzz}{ml.machine_memory=12405485568, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}]; nested: RemoteTransportException[[node-207][xx.xxx.xx.xxx:zzz][internal:index/shard/recovery/start_recovery]]; nested: RecoveryEngineException[Phase[1] prepare target for translog failed]; nested: RemoteTransportException[[node-728][yy.yyyy.yy.yyy:zzz][internal:index/shard/recovery/prepare_translog]]; nested: TranslogCorruptedException[translog from source [/elasticsearch/elasticsearch-6.8.1/data/nodes/0/indices/qWFVysb7Sg65AyHPTJKXsA/0/translog] is corrupted]; nested: NoSuchFileException[/elasticsearch/elasticsearch-6.8.1/data/nodes/0/indices/qWFVysb7Sg65AyHPTJKXsA/0/translog/translog-1.tlog]; ","last_allocation_status":"no_attempt"},"can_allocate":"no","allocate_explanation":"cannot allocate because allocation is not permitted to any of the nodes","node_allocation_decisions":[{"node_id":"P53GOlmbSmmgfM6VUwxKDQ","node_name":"node-207","transport_address":"xx.xxx.xx.xxx:zzz","node_attributes":{"ml.machine_memory":"12405493760","ml.max_open_jobs":"20","xpack.installed":"true","ml.enabled":"true"},"node_decision":"no","deciders":[{"decider":"max_retry","decision":"NO","explanation":"shard has exceeded the maximum number of retries [5] on failed allocation attempts - manually call [/_cluster/reroute?retry_failed=true] to retry, [unassigned_info[[reason=ALLOCATION_FAILED], at[2021-02-21T02:29:44.489Z], failed_attempts[5], delayed=false, details[failed shard on node [nyhbyj_LT9S9uuPxboBfag]: failed recovery, failure RecoveryFailedException[[mqm_djg8z5rlx1xnehe93109jk5qm_tr_test_index][0]: Recovery failed from {node-207}{P53GOlmbSmmgfM6VUwxKDQ}{yVyYSaOdRQSYeJFTbEWA5w}{xx.xxx.xx.xxx}{xx.xxx.xx.xxx:zzz}{ml.machine_memory=12405493760, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true} into {node-728}{nyhbyj_LT9S9uuPxboBfag}{3XaRctYSQfuBFC1Klif3HQ}{yy.yyyy.yy.yyy}{yy.yyyy.yy.yyy:zzz}{ml.machine_memory=12405485568, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}]; nested: RemoteTransportException[[node-207][xx.xxx.xx.xxx:zzz][internal:index/shard/recovery/start_recovery]]; nested: RecoveryEngineException[Phase[1] prepare target for translog failed]; nested: RemoteTransportException[[node-728][yy.yyyy.yy.yyy:zzz][internal:index/shard/recovery/prepare_translog]]; nested: TranslogCorruptedException[translog from source [/elasticsearch/elasticsearch-6.8.1/data/nodes/0/indices/qWFVysb7Sg65AyHPTJKXsA/0/translog] is corrupted]; nested: NoSuchFileException[/elasticsearch/elasticsearch-6.8.1/data/nodes/0/indices/qWFVysb7Sg65AyHPTJKXsA/0/translog/translog-1.tlog]; ], allocation_status[no_attempt]]]"}]},{"node_id":"nyhbyj_LT9S9uuPxboBfag","node_name":"node-728","transport_address":"yy.yyyy.yy.yyy:zzz","node_attributes":

Perhaps this will help
