"Migrating .kibana_3 saved objects to .kibana_4" - 7.9.3 upgrade to 7.10

Deleting the .kibana_4 index and restarting the Kibana instance leaves me in a permanent death loop. It will run for 3+ hours and do nothing. Checking the journalctl logs, the only thing that ever shows up is "Migrating .kibana_3 saved objects to .kibana_4". This is a standalone node and it has failed on every upgrade since version 7.6.0, forcing me to delete all of the Kibana indices, which is clearly destructive!
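
For reference, this is roughly how I'm tailing the logs (assuming Kibana runs as a systemd unit named kibana.service; adjust the unit name if yours differs):

```
# Follow the Kibana service logs live during startup
sudo journalctl -u kibana.service -f

# Or look back over the last restart and filter for the migration message
sudo journalctl -u kibana.service --since "1 hour ago" | grep -i "Migrating .kibana"
```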

Even if you export the saved objects prior to the upgrade, the import will fail every time...
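
For what it's worth, this is the kind of export/import I'm doing through the saved objects API (the localhost URL and the object types are just placeholders for my setup):

```
# Export dashboards, visualizations and index patterns before the upgrade
curl -X POST "http://localhost:5601/api/saved_objects/_export" \
  -H "kbn-xsrf: true" -H "Content-Type: application/json" \
  -d '{"type": ["index-pattern", "visualization", "dashboard"], "includeReferencesDeep": true}' \
  -o export.ndjson

# Import after the upgrade -- this is the step that fails every time
curl -X POST "http://localhost:5601/api/saved_objects/_import?overwrite=true" \
  -H "kbn-xsrf: true" \
  --form file=@export.ndjson
```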

Is there any way to recover, yet again, without having to rebuild everything? I don't have the 60+ hours to redo them. Rolling back to 7.9.3 has failed as well.

If importing the objects from a previous version fails, it probably means that Kibana is unable to migrate these objects. The most likely cause of this is a corrupt document that was manually edited in some form. This will usually only affect a single document or a handful of documents, so it shouldn't be necessary to delete everything.
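
If you want to poke at the source index yourself, something along these lines can help surface odd-looking documents (the localhost URL and the dashboard type are only examples; use whatever type the error log points at):

```
# List the saved object types present in the old index
curl -s "http://localhost:9200/.kibana_3/_search?size=0" \
  -H "Content-Type: application/json" \
  -d '{"aggs": {"types": {"terms": {"field": "type", "size": 50}}}}'

# Pull a few documents of a suspect type to inspect them by hand
curl -s "http://localhost:9200/.kibana_3/_search?size=5" \
  -H "Content-Type: application/json" \
  -d '{"query": {"term": {"type": "dashboard"}}}'
```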

Can you share your complete Kibana logs for the upgrade failure?

For the import part, on 7.10 I did see that an index remap was added to the GUI side, which is 1000% easier, since for some odd reason auditbeat-* is not auditbeat-* on import...

That is the least of my concerns; following the Git repos all the way back to the early 6.x versions, the same upgrade failure has existed.

"Migrating .kibana_3 saved objects to .kibana_4" on kibana startup ends with total failure often times for upgrade paths.

You are forced to do one of two things: roll back, which is a 50/50 chance without a full restore, or delete the index, which then deletes everything. The delete is what causes the import failures, as the index names from before are gone. There is no manual editing being done.
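
Before resorting to the delete, this is the sort of check I run to see which .kibana indices and aliases are actually there (standard cat APIs against my local node):

```
# Show all .kibana_* indices with their health and doc counts
curl -s "http://localhost:9200/_cat/indices/.kibana*?v"

# Show which concrete index the .kibana alias currently points at
curl -s "http://localhost:9200/_cat/aliases/.kibana*?v"
```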

This only happens on a standalone node, mind you. I haven't had a failure on a cluster.

Which logs would you like? They all get stuck at the migration message, so they're not really helpful.

It would be useful to see the logs from the first time an upgraded node is started. When Kibana first attempts an upgrade, it will add a migration lock which prevents further instances from attempting a migration. This lock also prevents the node that created it from attempting the migration a second time after being restarted. So the logs after Kibana fails the first time usually don't contain anything useful, but the first failure will have some kind of ERROR/FATAL log that will shed some light on the root cause.

You can read more about this process, possible causes of failure, and ways to resolve it here: https://www.elastic.co/guide/en/kibana/7.9/upgrade-migrations.html
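
As a rough sketch of the retry path described there (double-check it against the documentation for your exact version before running anything): remove only the incomplete target index that the failed migration created, then restart Kibana so it attempts the migration again and logs that first failure.

```
# Delete only the half-created target index from the failed migration;
# the source .kibana_3 index and the .kibana alias are left untouched
curl -X DELETE "http://localhost:9200/.kibana_4"

# Restart Kibana so it retries the migration (systemd unit name may differ)
sudo systemctl restart kibana.service
```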
