Kibana Uptime monitors broken after migrating to another cluster


I recently migrated to another cluster by restoring a snapshot after creating the new cluster.
Because of this, I think the encryption keys for the encrypted saved objects have changed. This causes a problem for me, because my uptime monitors no longer work. I would simply delete them and recreate them, but even deletion fails with an error.
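My understanding is that Kibana decrypts these objects with the key configured as `xpack.encryptedSavedObjects.encryptionKey` in `kibana.yml`. On Elastic Cloud that key is managed by the platform, so a new deployment presumably got a new one. For a self-managed setup the setting would look like this (sketch only, placeholder value, not a real key):

```yaml
# kibana.yml -- on Elastic Cloud this key is managed for you.
# Any string of 32 or more characters works; keep it identical across
# Kibana instances that must read the same encrypted saved objects.
xpack.encryptedSavedObjects.encryptionKey: "replace-with-a-32-plus-character-string"
```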

From the Kibana log, this is the error:

```
[kibana.log][ERROR] Unable to decrypt attribute "secrets"
Error: Unable to decrypt attribute "secrets"
    at EncryptedSavedObjectsService.attributesToDecryptIterator (/usr/share/kibana/x-pack/plugins/encrypted_saved_objects/server/crypto/encrypted_saved_objects_service.js:389:15)
    at attributesToDecryptIterator.throw (<anonymous>)
    at EncryptedSavedObjectsService.decryptAttributes (/usr/share/kibana/x-pack/plugins/encrypted_saved_objects/server/crypto/encrypted_saved_objects_service.js:302:23)
    at Object.getDecryptedAsInternalUser (/usr/share/kibana/x-pack/plugins/encrypted_saved_objects/server/saved_objects/index.js:59:23)
```

I am currently on Kibana 8.6 and have been unable to find any resource describing how to delete these corrupted uptime monitors.

Has anyone ever encountered this?


Hello @George_ML, welcome to the Elastic community!

Can you please share a bit more detail? Did you migrate between clusters on the same version, or did you also upgrade Kibana?

We implemented a change a while ago so that decryption errors are ignored when you delete a monitor, which is why it's strange that you are unable to delete yours.

Also, are you using private locations or Elastic Cloud to run your monitors? Based on that info, I might be able to help you clean up your monitors.
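If it comes down to manual cleanup, one path is the Kibana Saved Objects API. This is only a sketch I have not run against your deployment: the saved object type name `synthetics-monitor` and the `force=true` flag are assumptions on my part, and the script only prints the `curl` command so you can review it before running it.

```shell
# Sketch of a manual cleanup via the Kibana Saved Objects API.
# Assumptions: default space, saved object type "synthetics-monitor",
# credentials in KIBANA_AUTH ("user:password").
KIBANA_URL="${KIBANA_URL:-http://localhost:5601}"

# Print the DELETE command for one monitor so you can review it first;
# run it afterwards by piping the output to sh.
delete_monitor_cmd() {
  printf 'curl -X DELETE -u "$KIBANA_AUTH" -H "kbn-xsrf: true" "%s/api/saved_objects/synthetics-monitor/%s?force=true"\n' \
    "$KIBANA_URL" "$1"
}

# Find candidate monitor ids first (manual step):
#   curl -u "$KIBANA_AUTH" "$KIBANA_URL/api/saved_objects/_find?type=synthetics-monitor"
delete_monitor_cmd "replace-with-monitor-id"
```

The `kbn-xsrf` header is required by Kibana on all write requests; the dry-run `printf` is just a safety net so nothing is deleted until you have checked the ids.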


Hello @George_ML,

I just checked: the PR that ignores decryption errors on delete actually went into the 8.7 release, so upgrading should let you delete the monitors.

I would still like to know more about why you faced this error in the first place.



Both clusters are on the same version. However, while moving from the AWS marketplace to the Azure marketplace, I was unable to keep the same organisation.
Because of that, I had to create a new cluster in Azure and set up a new snapshot repository in S3 that was shared across both clusters. That way I could create a snapshot in one cluster and restore it in the other.
During the restore, some internal Kibana indices had issues, and I had to close them to get the restore to complete. I suspect those Kibana indices are what caused this problem.

As for upgrading: unfortunately I have a legacy project that doesn't work well with WASM, and because the `@elastic/elasticsearch` package has used undici since v8, I cannot upgrade the client past 7.17.0. Version 8.6 of the stack seems to be the latest version compatible with v7.17 of the npm package, as I was getting errors when sending logs against anything newer.
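For reference, the constraint in my `package.json` looks roughly like this (a sketch of the pin, not my full manifest):

```json
{
  "dependencies": {
    "@elastic/elasticsearch": "~7.17.0"
  }
}
```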

So for now, I am forced to stay on v8.6.2.

To answer your questions: I don't have any private locations. My deployment is set up in Elastic Cloud, so I assume the monitors run on Elastic Cloud as well.

Thank you

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.