Unable to get back into Watcher section of Kibana

I saved a change to an advanced watch (trying to add a Slack action) and now have a

Watcher: Error 400 Bad Request: json argument must contain an actionJson.slack.message.to property

error every time I click on Watcher in the Kibana Management page to get back in and edit the watch.

So I'm kind of stuck ... how can I fix this? Is there any way to edit or just delete watches from outside of Kibana?

I just hit this bug again after trying to use text="" in my watch.

Watcher: Error 400 Bad Request: json argument must contain an actionJson.slack.message.text property

The only way I know to get around this is to delete my Elasticsearch data folder and set everything up again (index patterns, visualizations, dashboards, watches).

Is there another way?

Hey Rob,

I suppose this is just a duplicate of Watcher Heartbeat monitor.status query help, or am I misreading it?

--Alex

Hi Alex,

That was a separate issue in which I referenced this one. I'll reply here to the comment you left there.

I think you may be running into this Kibana issue: Invalid watches should not break the entire UI · Issue #18532 · elastic/kibana · GitHub

First, can you share the full watch and your elasticsearch.yml Slack configuration? You can do this from the Dev Tools console by running the Get Watch API.
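
For example (the watch id here is just a placeholder, swap in your own):

GET _xpack/watcher/watch/<your_watch_id>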

Also, which versions of Elasticsearch and Kibana are you running?

It might be easier to edit the watch from the Dev Tools console and add a to parameter there; that should get the Watcher UI working again.
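
For what it's worth, watches can also be deleted entirely outside of Kibana with the Delete Watch API from the same console; a quick sketch (the watch id is a placeholder):

DELETE _xpack/watcher/watch/<your_watch_id>

Editing works the same way: a PUT _xpack/watcher/watch/<your_watch_id> with the full, corrected watch definition as the request body replaces the stored watch.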

I'll try to get a fix in once all the information is provided.

It does sound like the Kibana issue you linked to.

I'm using version 6.3.0.

Here's my elasticsearch.yml (with some characters replaced with '#'):

bootstrap.memory_lock: false
cluster.name: elasticsearch
http.port: 9200
network.host: ##.##.##.##
node.data: true
node.ingest: true
node.master: true
node.max_local_storage_nodes: 1
node.name: ###
path.data: C:\ProgramData\Elastic\Elasticsearch\data
path.logs: C:\ProgramData\Elastic\Elasticsearch\logs
transport.tcp.port: 9300
xpack.license.self_generated.type: trial
xpack.security.enabled: false
xpack.notification.email.account:
    exchange_account:
        profile: outlook
        email_defaults:
            from: ##@###.##
        smtp:
            auth: false
            starttls.enable: false
            host: ##.###.##
xpack.notification.slack:
  account:
    monitoring:
      url: https://hooks.slack.com/services/###
      message_defaults:
        from: x-pack
        to: "@###"
        icon: https://www.yc.edu/webtools/apps/alertyc/alerticon.jpg
        attachment:
          fallback: "X-Pack Notification"
          color: "#36a64f"
          title: "X-Pack Notification"
          title_link: "https://www.elastic.co/guide/en/x-pack/current/index.html"
          text: "One of your watches generated this notification."
          mrkdwn_in: "pretext, text"

Here's what GET _xpack/watcher/watch/<watch_id> from the Dev Tools console is giving me (with some characters replaced with '#'):

{
  "found": true,
  "_id": "ping_adv_id",
  "_version": 27,
  "status": {
    "state": {
      "active": true,
      "timestamp": "2018-07-17T15:47:01.932Z"
    },
    "last_checked": "2018-07-17T22:18:41.232Z",
    "last_met_condition": "2018-07-17T22:18:41.232Z",
    "actions": {
      "send_email": {
        "ack": {
          "timestamp": "2018-07-17T16:02:08.323Z",
          "state": "ackable"
        },
        "last_execution": {
          "timestamp": "2018-07-17T22:03:40.116Z",
          "successful": true
        },
        "last_successful_execution": {
          "timestamp": "2018-07-17T22:03:40.116Z",
          "successful": true
        },
        "last_throttle": {
          "timestamp": "2018-07-17T22:18:41.232Z",
          "reason": "throttling interval is set to [1h] but time elapsed since last execution is [15m]"
        }
      },
      "notify-slack": {
        "ack": {
          "timestamp": "2018-07-17T16:02:08.323Z",
          "state": "ackable"
        },
        "last_execution": {
          "timestamp": "2018-07-17T22:18:41.232Z",
          "successful": true
        },
        "last_successful_execution": {
          "timestamp": "2018-07-17T22:18:41.232Z",
          "successful": true
        }
      }
    },
    "execution_state": "throttled",
    "version": 27
  },
  "watch": {
    "trigger": {
      "schedule": {
        "interval": "15m"
      }
    },
    "input": {
      "search": {
        "request": {
          "search_type": "query_then_fetch",
          "indices": [
            "heartbeat-*"
          ],
          "types": [],
          "body": {
            "size": 0,
            "query": {
              "bool": {
                "must": [
                  {
                    "term": {
                      "monitor.status": {
                        "value": "down"
                      }
                    }
                  },
                  {
                    "range": {
                      "@timestamp": {
                        "from": "now-15m"
                      }
                    }
                  }
                ]
              }
            },
            "aggregations": {
              "by_monitors": {
                "terms": {
                  "field": "monitor.host",
                  "size": 100,
                  "min_doc_count": 1
                }
              }
            }
          }
        }
      }
    },
    "condition": {
      "compare": {
        "ctx.payload.hits.total": {
          "gt": 0
        }
      }
    },
    "throttle_period_in_millis": 3600000,
    "actions": {
      "send_email": {
        "email": {
          "profile": "standard",
          "to": [
            "###@###.##"
          ],
          "subject": "Unresponsive test systems",
          "body": {
            "text": "{{ctx.payload.hits.total}} unresponsive hosts: {{#ctx.payload.aggregations.by_monitors.buckets}}{{key}} {{get_latest.buckets.0.group_by_event_name.buckets.0.key}} {{/ctx.payload.aggregations.by_monitors.buckets}}",
            "html": "{{ctx.payload.hits.total}} system(s) not responding to pings:<P>{{#ctx.payload.aggregations.by_monitors.buckets}}{{key}}<BR>{{/ctx.payload.aggregations.by_monitors.buckets}}"
          }
        }
      },
      "notify-slack": {
        "throttle_period_in_millis": 900000,
        "slack": {
          "account": "monitoring",
          "message": {
            "from": "Automation Systems Watch",
            "text": """
{{ctx.payload.hits.total}} system(s) not responding to pings:
{{#ctx.payload.aggregations.by_monitors.buckets}}{{key}}

{{/ctx.payload.aggregations.by_monitors.buckets}}
""",
            "icon": ":oncoming_automobile:"
          }
        }
      }
    },
    "metadata": {
      "name": "Ping test systems",
      "xpack": {
        "type": "json"
      }
    }
  }
}

I'll try to figure out how to use the Watch API in the Dev Tools console to edit the watch, since I've now hit this problem a third time and don't want to have to set everything up yet again.

The error message I'm getting right now from Kibana is:
"Watcher: Error 400 Bad Request: json argument must contain an actionJson.slack.message.to property"

So, as a workaround, you can configure the to property directly in the watch's slack action instead of in the account defaults. This should mitigate the issue and the Watcher UI should function again.
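
For example, with a to property added, the slack action from the watch above would look something like this (the channel name is just a placeholder and the text is shortened here):

"notify-slack": {
  "throttle_period_in_millis": 900000,
  "slack": {
    "account": "monitoring",
    "message": {
      "from": "Automation Systems Watch",
      "to": [ "#alerts" ],
      "text": "{{ctx.payload.hits.total}} system(s) not responding to pings",
      "icon": ":oncoming_automobile:"
    }
  }
}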

Ok, thanks Alex. I was able to use the Watch API from the Dev Tools console to add the to property to the watch and the Watcher UI is accessible again.

I opened https://github.com/elastic/kibana/issues/20970 in Kibana with a proper description and reproduction steps in order to get this fixed.
