Elastic cluster state restore never really works for the .kibana_x.x.x index (workarounds need to be applied)

Hi,

When I try to restore the cluster state from a snapshot taken on a cluster with a different version (for example, right now I want to restore a 7.13.2 snapshot into a 7.16.2 cluster), the Kibana instance always crashes with this error:

```
"type":"log","@timestamp":"2021-12-20T16:31:31+00:00","tags":["warning","environment"],"pid":1,"message":"Detected an unhandled Promise rejection: ResponseError: Saved object index alias [.kibana_7.16.2] not found: index_not_found_exception: [index_not_found_exception] Reason: no such index [.kibana_7.16.2] and [require_alias] request flag is [true] and [.kibana_7.16.2] is not an alias
```

The problem appears to be:

  1. Elastic restored the .kibana_7.13.2_001 index along with the aliases .kibana and .kibana_7.13.2, even though I had selected "Do not restore aliases" in the Kibana Restore UI.
  2. Elastic removed all aliases from the .kibana_7.16.2 index, which was created during the very first start-up of Kibana (see the alias check after this list).
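This is how I checked the alias situation after the restore (again just a sketch with a placeholder cluster address):

```python
import requests

ES = "http://localhost:9200"  # placeholder cluster address

# List every alias on the .kibana* indices to see which index
# currently holds .kibana and .kibana_7.16.2.
resp = requests.get(f"{ES}/_cat/aliases/.kibana*", params={"format": "json"})
for row in resp.json():
    print(row["alias"], "->", row["index"])
```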

To work around this, I tried the following:

  1. Add the aliases .kibana_7.16.2 and .kibana to the restored .kibana_7.13.2_001 index to make Kibana run again (sketched after this list).

  2. I cannot reindex the restored Kibana index into the new Kibana index because of field type mismatches (errors like "cannot write string in long" and similar).
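For step 1, this is roughly what I run (placeholder cluster address; the index name is the one from my restored snapshot):

```python
import requests

ES = "http://localhost:9200"  # placeholder cluster address

# Point both the generic .kibana alias and the version-specific
# .kibana_7.16.2 alias at the restored 7.13.2 saved-objects index.
resp = requests.post(
    f"{ES}/_aliases",
    json={
        "actions": [
            {"add": {"index": ".kibana_7.13.2_001", "alias": ".kibana"}},
            {"add": {"index": ".kibana_7.13.2_001", "alias": ".kibana_7.16.2"}},
        ]
    },
)
print(resp.json())
```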

But now, when Kibana is restarted, it does not start up, and the errors tell me that the index .kibana_7.16.2 could not be found. After I manually create the .kibana_7.16.2 index, Kibana starts up successfully again.
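The manual step is nothing more than creating an empty index (again a sketch with a placeholder cluster address); I have no idea whether an empty index is what Kibana actually expects here, which is part of my question:

```python
import requests

ES = "http://localhost:9200"  # placeholder cluster address

# Create a bare .kibana_7.16.2 index (no explicit mappings or settings)
# so that Kibana stops complaining about the missing index on restart.
resp = requests.put(f"{ES}/.kibana_7.16.2")
print(resp.json())
```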

This seems very hacky, and I do not know whether these workarounds will have any negative impact in the future.

Can you please advise whether this is supported, and what exactly the procedure for restoring the cluster state (including the Kibana internal index) into a cluster with a newer version should look like?

Thanks!
