Kibana pod stopped after restoring the index pattern

Hi Team,

We deleted the data from an index in Elasticvue. We had a backup, so we restored the index, but now the Kibana pod is not running after the restore.
Please find the Kibana pod logs below:

{"type":"log","@timestamp":"2022-12-01T07:34:14+00:00","tags":["warning","environment"],"pid":1213,"message":"Detected an unhandled Promise rejection.\n{"error":{"root_cause":[{"type":"unsupported_operation_exception","reason":"unsupported_operation_exception: _source only indices can't be searched or filtered"}],"type":"engine_exception","reason":"Couldn't resolve version","index_uuid":"2_0D1vGaSGuL5630f2tX5w","shard":"0","index":".kibana_7.14.0_001","caused_by":{"type":"unsupported_operation_exception","reason":"unsupported_operation_exception: _source only indices can't be searched or filtered"}},"status":500}"}
{"type":"log","@timestamp":"2022-12-01T07:34:18+00:00","tags":["error","plugins","spaces"],"pid":1213,"message":"Unable to navigate to space "default". {"error":{"root_cause":[{"type":"unsupported_operation_exception","reason":"unsupported_operation_exception: _source only indices can't be searched or filtered"}],"type":"engine_exception","reason":"Couldn't resolve version","index_uuid":"2_0D1vGaSGuL5630f2tX5w","shard":"0","index":".kibana_7.14.0_001","caused_by":{"type":"unsupported_operation_exception","reason":"unsupported_operation_exception: _source only indices can't be searched or filtered"}},"status":500}"}
{"type":"error","@timestamp":"2022-12-01T07:34:18+00:00","tags":,"pid":1213,"level":"error","error":{"message":"Internal Server Error","name":"Error","stack":"Error: Internal Server Error\n at HapiResponseAdapter.toError (/usr/share/kibana/src/core/server/http/router/response_adapter.js:128:19)\n at HapiResponseAdapter.toHapiResponse (/usr/share/kibana/src/core/server/http/router/response_adapter.js:82:19)\n at HapiResponseAdapter.handle (/usr/share/kibana/src/core/server/http/router/response_adapter.js:73:17)\n at interceptRequest (/usr/share/kibana/src/core/server/http/lifecycle/on_post_auth.js:58:36)\n at runMicrotasks ()\n at processTicksAndRejections (internal/process/task_queues.js:95:5)\n at exports.Manager.execute (/usr/share/kibana/node_modules/@hapi/hapi/lib/toolkit.js:60:28)\n at Request._invoke (/usr/share/kibana/node_modules/@hapi/hapi/lib/request.js:397:30)\n at Request._lifecycle (/usr/share/kibana/node_modules/@hapi/hapi/lib/request.js:370:32)\n at Request._execute (/usr/share/kibana/node_modules/@hapi/hapi/lib/request.js:279:9)"},"url":"

How did you take a backup?

Hey there!
The backup was taken from the Kibana UI as a snapshot. Later, the same snapshot was visible in Elasticvue. We selected the snapshot and the index we wanted to restore. The restore completed successfully, but suddenly the cluster health shown in Elasticvue turned red and the Kibana pod went into a non-running state.
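For reference, this is roughly what the equivalent restore looks like through the Elasticsearch snapshot API; the repository, snapshot, and index names below are placeholders rather than our actual values:

# List the snapshots held in the repository (repository name is a placeholder)
curl -s "http://localhost:9200/_snapshot/my_backup_repo/_all?pretty"

# Restore a single index from one snapshot
curl -s -X POST "http://localhost:9200/_snapshot/my_backup_repo/snapshot_2022_11_30/_restore" -H 'Content-Type: application/json' -d '
{
  "indices": "release-staging-data",
  "include_global_state": false
}'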

What we need now: is there any way we can take a backup of all our Kibana objects as NDJSON files?
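Exporting saved objects as NDJSON can normally be done through Kibana's saved objects export API while Kibana is running; a minimal sketch, assuming Kibana is reachable on localhost:5601 and the listed object types are the ones we care about:

# Export dashboards, visualizations, index patterns and searches as NDJSON
# (URL and object types are placeholder assumptions)
curl -s -X POST "http://localhost:5601/api/saved_objects/_export" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d '
{
  "type": ["dashboard", "visualization", "index-pattern", "search"],
  "includeReferencesDeep": true
}' > kibana_saved_objects.ndjson

The resulting file can later be re-imported through Stack Management > Saved Objects, so this works as a preventive backup while the objects still exist rather than as a recovery path.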

Please reply ASAP; our production dashboards are down.

Welcome to our community! :smiley:

We are happy to help, but it's best effort and we don't provide SLAs here, sorry.

What do the Elasticsearch logs show?

Please find the logs:

[lciadm100@atvts2469 deployment]$ kubectl logs elasticsearch-master-0 -n reg-elk | less
{"type": "server", "timestamp": "2022-12-01T16:58:11,893Z", "level": "WARN", "component": "r.suppressed", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "path: /release-staging-data/_doc/01GK79Z8QQKQ2VH5GBD6V9WQAW, params: {index=release-staging-data, id=01GK79Z8QQKQ2VH5GBD6V9WQAW}", "cluster.uuid": "SoFn84o9SIaOu_R2_uwXgQ", "node.id": "Q5uzalvVSQW66h6ej_hK7Q" ,
"stacktrace": ["org.elasticsearch.transport.RemoteTransportException: [elasticsearch-master-0][192.168.54.177:9300][indices:data/read/get[s]]",
"Caused by: org.elasticsearch.index.engine.EngineException: Couldn't resolve version",
"at org.elasticsearch.index.engine.Engine.getFromSearcher(Engine.java:578) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.index.engine.ReadOnlyEngine.get(ReadOnlyEngine.java:261) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.index.shard.IndexShard.get(IndexShard.java:1019) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.index.get.ShardGetService.innerGet(ShardGetService.java:170) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.index.get.ShardGetService.get(ShardGetService.java:94) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.index.get.ShardGetService.get(ShardGetService.java:85) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.get.TransportGetAction.shardOperation(TransportGetAction.java:98) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.get.TransportGetAction.shardOperation(TransportGetAction.java:35) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.support.single.shard.TransportSingleShardAction.lambda$asyncShardOperation$0(TransportSingleShardAction.java:99) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.ActionRunnable.lambda$supply$0(ActionRunnable.java:47) [elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.ActionRunnable$2.doRun(ActionRunnable.java:62) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:732) [elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26) [elasticsearch-7.14.0.jar:7.14.0]",
"at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]",
"at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]",
"at java.lang.Thread.run(Thread.java:831) [?:?]",
"Caused by: java.lang.UnsupportedOperationException: _source only indices can't be searched or filtered",
"at org.elasticsearch.snapshots.sourceonly.SeqIdGeneratingFilterReader$SeqIdGeneratingSubReaderWrapper$1.terms(SeqIdGeneratingFilterReader.java:159) ~[?:?]",
"at org.apache.lucene.index.FilterLeafReader.terms(FilterLeafReader.java:366) ~[lucene-core-8.9.0.jar:8.9.0 05c8a6f0163fe4c330e93775e8e91f3ab66a3f80 - mayyasharipova - 2021-06-10 17:50:37]",
"at org.elasticsearch.common.lucene.uid.PerThreadIDVersionAndSeqNoLookup.(PerThreadIDVersionAndSeqNoLookup.java:61) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.common.lucene.uid.VersionsAndSeqNoResolver.getLookupState(VersionsAndSeqNoResolver.java:62) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.common.lucene.uid.VersionsAndSeqNoResolver.loadDocIdAndVersion(VersionsAndSeqNoResolver.java:122) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.index.engine.Engine.getFromSearcher(Engine.java:574) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.index.engine.ReadOnlyEngine.get(ReadOnlyEngine.java:261) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.index.shard.IndexShard.get(IndexShard.java:1019) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.index.get.ShardGetService.innerGet(ShardGetService.java:170) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.index.get.ShardGetService.get(ShardGetService.java:94) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.index.get.ShardGetService.get(ShardGetService.java:85) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.get.TransportGetAction.shardOperation(TransportGetAction.java:98) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.get.TransportGetAction.shardOperation(TransportGetAction.java:35) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.support.single.shard.TransportSingleShardAction.lambda$asyncShardOperation$0(TransportSingleShardAction.java:99) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.ActionRunnable.lambda$supply$0(ActionRunnable.java:47) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.ActionRunnable$2.doRun(ActionRunnable.java:62) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:732) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26) ~[elasticsearch-7.14.0.jar:7.14.0]",
:

If you are okay with it, can we have a quick call sometime today?

Currently we are able to get the Kibana pod up, but the problem is that it doesn't have any saved objects.
Is there any way we can get all the saved objects back from the previous backup files?
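A quick way to check whether those backup files can still be used for this is to look at how the snapshot repository is registered; a source-only repository shows "type": "source" in its settings. Assuming the cluster is reachable on localhost:9200 (repository name is a placeholder):

# Show all registered snapshot repositories and their settings
curl -s "http://localhost:9200/_snapshot/_all?pretty"

# Or inspect one repository directly
curl -s "http://localhost:9200/_snapshot/my_backup_repo?pretty"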

So you took a source-only backup of the Kibana index, right, not a full index snapshot?
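For context, a source-only snapshot stores only the _source and index metadata of each index, so an index restored from it can't be searched or filtered directly, which matches the "_source only indices can't be searched or filtered" error in the logs above. A source-only repository is registered with "type": "source" wrapping a normal repository; a rough sketch with placeholder names and paths:

# Registering a source-only repository (name, delegate type and path are placeholder assumptions)
curl -s -X PUT "http://localhost:9200/_snapshot/my_source_only_repo" -H 'Content-Type: application/json' -d '
{
  "type": "source",
  "settings": {
    "delegate_type": "fs",
    "location": "/mount/backups/my_source_only_repo"
  }
}'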

No sorry, that is not an option here.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.