Question About Migrating Watchers From 5.3 to 6.8

Hey ES crew,

I'm nearing the end of my project to migrate our on-prem 5.3 ES cluster to a 6.8 AWS ES hosted cluster. One of the final things I need to accomplish is migrating the watchers stored on our on-prem instance into the AWS ES instance and I'm having some trouble.

I've read that migrating watches between Kibana instances can be done with the snapshot/restore APIs: take a snapshot of the ".watches" index, delete the ".watches" index on the target cluster (if it exists), and then call the restore API to restore the migrated index to the target cluster. I've tried this method with my clusters, and unfortunately zero watches appeared in the Kibana UI of my 6.8 instance after the snapshot was restored.
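
In case it's useful, here's roughly the sequence of calls I attempted, sketched in Python with requests. The hosts, repository name, and snapshot name are placeholders, the snapshot repository is assumed to already be registered on both clusters, and any auth / AWS request signing is omitted.

    # Rough sketch of the snapshot/restore sequence (placeholder hosts, repo and
    # snapshot names; auth / SigV4 signing for AWS ES is omitted).
    import requests

    SOURCE = "http://onprem-es53:9200"              # on-prem 5.3 cluster (placeholder)
    TARGET = "https://my-domain.es.amazonaws.com"   # AWS ES 6.8 domain (placeholder)
    REPO = "migration-repo"                         # must already exist on both clusters
    SNAPSHOT = "watches-snapshot"

    # 1. Snapshot only the .watches index on the source cluster.
    requests.put(
        f"{SOURCE}/_snapshot/{REPO}/{SNAPSHOT}?wait_for_completion=true",
        json={"indices": ".watches", "include_global_state": False},
    )

    # 2. Delete .watches on the target if it already exists.
    requests.delete(f"{TARGET}/.watches")

    # 3. Restore the .watches index from the snapshot on the target cluster.
    requests.post(
        f"{TARGET}/_snapshot/{REPO}/{SNAPSHOT}/_restore?wait_for_completion=true",
        json={"indices": ".watches"},
    )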

I realize this could be due to a change in the index structure between 5.3 and 6.8, and I believe there's an X-Pack API you can call to assist with updating indices prior to an upgrade. Unfortunately, AWS ES has the X-Pack APIs disabled as far as I know, so I'm unable to call the Migration Upgrade API, and I haven't had any luck finding a post that covers manually re-indexing the data for compatibility, similar to this post about the ".kibana" index.
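
For reference, my understanding is that on a 6.x cluster where X-Pack is enabled, the migration calls would look roughly like the sketch below (host is a placeholder, and I can't verify this against AWS ES since the endpoints are blocked there):

    # Hedged sketch of the 6.x X-Pack migration APIs on a cluster that exposes
    # them (not AWS ES). Host is a placeholder.
    import requests

    ES = "http://interim-es68:9200"  # placeholder for a cluster with X-Pack enabled

    # Ask which indices (e.g. .watches) need upgrading before a major-version move.
    print(requests.get(f"{ES}/_xpack/migration/assistance").json())

    # Reindex/upgrade the .watches index in place so a newer version can read it.
    print(requests.post(
        f"{ES}/_xpack/migration/upgrade/.watches?wait_for_completion=true"
    ).json())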

There are only two other solutions I can think of. The first is to manually recreate every watch we have in the new Amazon ES instance (I'd like to avoid this). The second is to stand up a temporary ES 6.8 instance on-prem: take a snapshot of the 5.3 instance, restore it on the on-prem 6.8 instance and run the Migration Upgrade API there, and finally snapshot again and restore the re-indexed index into AWS ES.

What other options do I have for migrating watches between Kibana instances, especially across major versions like 5.3 to 6.8? Sorry for the basic questions. My end goal is to restore the watches from my 5.3 cluster into my 6.8 cluster and have the alerts appear in the Kibana alerting interface for modification, as AWS ES uses Amazon SNS for notifications. If this is not possible, please let me know.

Thanks!

Unfortunately, Watcher is not available on AWS ES; none of X-Pack is.

You can migrate to our Elasticsearch Service - https://www.elastic.co/products/elasticsearch/service - which has all of the features bundled in :slight_smile:

Also worth noting is that AWS blocks access to certain API endpoints, including /_cluster/settings and /_cluster/pending_tasks, which limits the control you have over managing your cluster.

As @warkolm mentioned, you can migrate to our Elasticsearch Service, and even continue to be billed through your existing AWS account if desired: https://aws.amazon.com/marketplace/pp/B01N6YCISK

This page gives you a comparison table (last updated July 30, 2019) between the two: https://www.elastic.co/aws-elasticsearch-service

Thank you both for the information. When I originally started this project I studied the linked chart and decided that AWS ES should meet our needs. I realize the X-Pack features aren't supported by AWS ES, but it appears to me that AWS ES still supports alerting as described here, and the alerting UI exists in AWS ES as well. What am I misunderstanding? Is AWS ES Alerting not analogous to Watcher?

For what it's worth, I did my best to pitch the fully featured Elastic hosting to my boss but was shot down due to hosting-environment concerns. After re-reading the comparison chart, I now realize that y'all's solution is hosted in AWS as well.

I may have made a grave mistake :frowning: This is why you don't give the new guy with zero Elastic experience the task of migrating production clusters :expressionless:

No. You would have to recreate all of your watches, as it would not read the .watches index. Nor would it behave exactly the same as what you’ve already created.

Thanks for the info, Aaron. Could you elaborate on what you mean by "Nor would it behave exactly the same as what you've already created"? I was able to export all of our watches in JSON format using a scroll query. If I were to re-create the watch referenced below exactly, shouldn't it still behave as it did on-prem?

      {
        "_index": ".watches",
        "_type": "watch",
        "_id": "Generic Name",
        "_score": 1,
        "_source": {
          "trigger": {
            "schedule": {
              "daily": {
                "at": [
                  "5:00",
                  "17:00"
                ]
              }
            }
          },
          "input": {
            "search": {
              "request": {
                "search_type": "query_then_fetch",
                "indices": [
                  "responses-*"
                ],
                "types": [],
                "body": {
                  "query": {
                    "bool": {
                      "must": [
                        {
                          "query_string": {
                            "query": """ChannelPartner:"Generic Partner""""
                          }
                        },
                        {
                          "query_string": {
                            "query": """Lender:"Generic Lender""""
                          }
                        },
                        {
                          "query_string": {
                            "query": """request: "Redacted""""
                          }
                        },
                        {
                          "range": {
                            "@timestamp": {
                              "gte": "now-24h",
                              "lte": "now"
                            }
                          }
                        }
                      ]
                    }
                  }
                }
              }
            }
          },
          "condition": {
            "compare": {
              "ctx.payload.hits.total": {
                "lt": 1
              }
            }
          },
          "actions": {
            "send_email": {
              "email": {
                "profile": "standard",
                "to": [
                  "Removed"
                ],
                "subject": "Watcher Alert: {{ctx.watch_id}} had {{ctx.payload.hits.total}} hits in last 24 hours",
                "body": {
                  "text": "See Kibana data over the last 12 hours here: Removed"
                }
              }
            }
          },
          "_status": {
            "state": {
              "active": true,
              "timestamp": "2018-02-22T17:34:09.630Z"
            },
            "actions": {
              "send_email": {
                "ack": {
                  "timestamp": "2018-02-22T17:34:09.630Z",
                  "state": "awaits_successful_execution"
                },
                "last_execution": {
                  "reason": "GeneralScriptException[Failed to compile inline script [See Kibana data over the last 12 hours here: https://tinyurl.com/mgwk9ok] using lang [mustache]]; nested: CircuitBreakingException[[script] Too many dynamic script compilations within one minute, max: [15/min]; please use on-disk, indexed, or scripts with parameters instead; this limit can be changed by the [script.max_compilations_per_minute] setting]; ",
                  "timestamp": "2019-08-09T05:00:00.232Z",
                  "successful": false
                }
              }
            },
            "last_checked": "2019-08-09T05:00:00.232Z",
            "last_met_condition": "2019-08-09T05:00:00.232Z"
          }
        }
      },
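
For context, this is roughly how I had planned to re-create the exported watches on a cluster that actually runs Watcher. The host and file name are placeholders, and the "_status" bookkeeping gets stripped so only the watch definition itself is PUT back:

    # Hedged sketch: re-create one exported watch via the 6.x Watcher API.
    # Only works on a cluster with X-Pack Watcher; host and file are placeholders.
    import json
    import requests

    ES = "http://target-es68:9200"  # placeholder

    # One hit from the scroll export, e.g. the document above saved to a file.
    with open("exported_watch.json") as f:
        exported = json.load(f)

    # The PUT watch API only accepts the watch definition, so keep just those keys.
    ALLOWED = {"trigger", "input", "condition", "actions", "transform",
               "metadata", "throttle_period"}
    watch_body = {k: v for k, v in exported["_source"].items() if k in ALLOWED}

    resp = requests.put(f"{ES}/_xpack/watcher/watch/{exported['_id']}", json=watch_body)
    print(resp.json())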

AWS Elasticsearch cannot use Watcher, so their alerting is their own creation. You cannot import Elasticsearch watches into it, which means you would have to create your own using their tool.

I know this sounds like a pitch, and it is, but you could just restore into Elastic Cloud and keep what's already working, just upgraded. I don't know what that entails for you, but it's a viable option that comes with many other benefits, as mentioned by others already.
