Hi everyone, I ran into an error while trying to restore a snapshot.
It all began when I checked my cluster health and noticed an unassigned shard. It turned out to be .kibana_alerting_cases_8.8.0_001, so I tried to get more information using GET /_cluster/allocation/explain.
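In case it matters, this is roughly how I spotted the unassigned shard in the first place (I don't remember the exact columns I asked for, so treat the header list as an approximation):

GET _cluster/health
GET _cat/shards?v&h=index,shard,prirep,state,unassigned.reason&s=state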
The explain call returned this:
{
"note": "No shard was specified in the explain API request, so this response explains a randomly chosen unassigned shard. There may be other unassigned shards in this cluster which cannot be assigned for different reasons. It may not be possible to assign this shard until one of the other shards is assigned correctly. To explain the allocation of other shards (whether assigned or unassigned) you must specify the target shard in the request to this API.",
"index": ".kibana_alerting_cases_8.8.0_001",
"shard": 0,
"primary": true,
"current_state": "unassigned",
"unassigned_info": {
"reason": "MANUAL_ALLOCATION",
"at": "2023-06-07T07:40:48.870Z",
"details": """failed shard on node [7xlWWuYzQpqdXZi16hQC1Q]: shard failure, reason [merge failed], failure org.apache.lucene.index.MergePolicy$MergeException: java.lang.IllegalStateException: this writer hit an unrecoverable error; cannot merge
at org.elasticsearch.server@8.8.0/org.elasticsearch.index.engine.InternalEngine$EngineMergeScheduler$2.doRun(InternalEngine.java:2667)
at org.elasticsearch.server@8.8.0/org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:983)
at org.elasticsearch.server@8.8.0/org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
at java.base/java.lang.Thread.run(Thread.java:1623)
Caused by: java.lang.IllegalStateException: this writer hit an unrecoverable error; cannot merge
at org.apache.lucene.core@9.6.0/org.apache.lucene.index.IndexWriter.hasPendingMerges(IndexWriter.java:2402)
at org.elasticsearch.server@8.8.0/org.elasticsearch.index.engine.InternalEngine$EngineMergeScheduler.afterMerge(InternalEngine.java:2625)
at org.elasticsearch.server@8.8.0/org.elasticsearch.index.engine.ElasticsearchConcurrentMergeScheduler.doMerge(ElasticsearchConcurrentMergeScheduler.java:123)
at org.apache.lucene.core@9.6.0/org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:700)
Caused by: org.apache.lucene.index.CorruptIndexException: checksum failed (hardware problem?) : expected=8caecba9 actual=bdd5f51 (resource=BufferedChecksumIndexInput(MemorySegmentIndexInput(path="/var/lib/elasticsearch/indices/GB-F-aapS9aS8lcrNikbXw/0/index/_1th6_Lucene90_0.tim")))
at org.apache.lucene.core@9.6.0/org.apache.lucene.codecs.CodecUtil.checkFooter(CodecUtil.java:440)
at org.apache.lucene.core@9.6.0/org.apache.lucene.codecs.lucene90.Lucene90CompoundFormat.writeCompoundFile(Lucene90CompoundFormat.java:153)
at org.apache.lucene.core@9.6.0/org.apache.lucene.codecs.lucene90.Lucene90CompoundFormat.write(Lucene90CompoundFormat.java:99)
at org.apache.lucene.core@9.6.0/org.apache.lucene.index.IndexWriter.createCompoundFile(IndexWriter.java:5742)
at org.apache.lucene.core@9.6.0/org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:5219)
at org.apache.lucene.core@9.6.0/org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4680)
at org.apache.lucene.core@9.6.0/org.apache.lucene.index.IndexWriter$IndexWriterMergeSource.merge(IndexWriter.java:6432)
at org.apache.lucene.core@9.6.0/org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:639)
at org.elasticsearch.server@8.8.0/org.elasticsearch.index.engine.ElasticsearchConcurrentMergeScheduler.doMerge(ElasticsearchConcurrentMergeScheduler.java:118)
... 1 more
""",
"last_allocation_status": "no_valid_shard_copy"
},
"can_allocate": "no_valid_shard_copy",
"allocate_explanation": "Elasticsearch can't allocate this shard because all the copies of its data in the cluster are stale or corrupt. Elasticsearch will allocate this shard when a node containing a good copy of its data joins the cluster. If no such node is available, restore this index from a recent snapshot.",
"node_allocation_decisions": [
{
"node_id": "7xlWWuYzQpqdXZi16hQC1Q",
"node_name": "xenpher",
"transport_address": "127.0.0.1:9300",
"node_attributes": {
"ml.allocated_processors": "4",
"ml.max_jvm_size": "6215958528",
"ml.allocated_processors_double": "4.0",
"xpack.installed": "true",
"ml.machine_memory": "12431904768"
},
"node_decision": "no",
"store": {
"in_sync": true,
"allocation_id": "0yCdkQXRQFaHD6pH-H87-g",
"store_exception": {
"type": "corrupt_index_exception",
"reason": "failed engine (reason: [merge failed]) (resource=preexisting_corruption)",
"caused_by": {
"type": "i_o_exception",
"reason": "failed engine (reason: [merge failed])",
"caused_by": {
"type": "corrupt_index_exception",
"reason": """checksum failed (hardware problem?) : expected=8caecba9 actual=bdd5f51 (resource=BufferedChecksumIndexInput(MemorySegmentIndexInput(path="/var/lib/elasticsearch/indices/GB-F-aapS9aS8lcrNikbXw/0/index/_1th6_Lucene90_0.tim")))"""
}
}
}
}
}
]
}
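The note at the top of that response says I should specify the target shard to explain any other shard. If I read the docs correctly, the request for this particular shard would look like this (the body fields are just the index name, shard number, and primary flag taken from the response above):

GET /_cluster/allocation/explain
{
  "index": ".kibana_alerting_cases_8.8.0_001",
  "shard": 0,
  "primary": true
}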
After that I tried to restore the index from my snapshot using this command:

POST _snapshot/backup1/nightly-snap-2023.06.03-i622btvwsnoiu8vw1odnwq/_restore?pretty
{
  "indices": ".kibana_alerting_cases_8.8.0_001"
}
and got this result:
{
"error": {
"root_cause": [
{
"type": "illegal_argument_exception",
"reason": "requested system indices [.kibana_alerting_cases_8.8.0_001], but system indices can only be restored as part of a feature state"
}
],
"type": "illegal_argument_exception",
"reason": "requested system indices [.kibana_alerting_cases_8.8.0_001], but system indices can only be restored as part of a feature state"
},
"status": 400
}
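From the error I understand that this is a system index, so it can only be restored as part of a feature state rather than by name. I have not run this yet because I am not sure it is safe, but based on the snapshot/restore docs I think I would first list the available feature states and then restore the one that owns this index. I am assuming the feature state is called "kibana", and that "indices": "-*" is the right way to skip the regular indices in the snapshot; both of those are guesses on my part:

GET /_features

POST _snapshot/backup1/nightly-snap-2023.06.03-i622btvwsnoiu8vw1odnwq/_restore
{
  "feature_states": [ "kibana" ],
  "indices": "-*",
  "include_global_state": false
}

Is that the right approach here, and will it also overwrite the other Kibana system indices?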
Can anybody help me with this?