Good afternoon,
I'm trying to create a Watcher for cluster alerting in Slack, based on the Elasticsearch cluster state watcher that ships with X-Pack. For testing, I'm verifying that it behaves like the built-in one. My requirements are to alert in Slack when the cluster has an outage, and to send a follow-up message when the cluster is healthy again.

However, I'm receiving transform errors when I simulate the Watcher. In a real-life scenario (i.e. shutting down one node), I keep getting e-mails saying the cluster is yellow, but I never receive an e-mail once the cluster is green again.
```json
{
  "trigger": {
    "schedule": {
      "interval": "1m"
    }
  },
  "input": {
    "chain": {
      "inputs": [
        {
          "check": {
            "search": {
              "request": {
                "search_type": "query_then_fetch",
                "indices": [
                  ".monitoring-es-*"
                ],
                "types": [],
                "body": {
                  "size": 1,
                  "sort": [
                    {
                      "timestamp": {
                        "order": "desc"
                      }
                    }
                  ],
                  "_source": [
                    "cluster_state.status"
                  ],
                  "query": {
                    "bool": {
                      "filter": [
                        {
                          "term": {
                            "cluster_uuid": "{{ctx.metadata.xpack.cluster_uuid}}"
                          }
                        },
                        {
                          "bool": {
                            "should": [
                              {
                                "term": {
                                  "_type": "cluster_state"
                                }
                              },
                              {
                                "term": {
                                  "type": "cluster_stats"
                                }
                              }
                            ]
                          }
                        }
                      ]
                    }
                  }
                }
              }
            }
          }
        },
        {
          "alert": {
            "search": {
              "request": {
                "search_type": "query_then_fetch",
                "indices": [
                  ".monitoring-alerts-6"
                ],
                "types": [],
                "body": {
                  "size": 1,
                  "terminate_after": 1,
                  "query": {
                    "bool": {
                      "filter": {
                        "term": {
                          "_id": "{{ctx.watch_id}}"
                        }
                      }
                    }
                  },
                  "sort": [
                    {
                      "timestamp": {
                        "order": "desc"
                      }
                    }
                  ]
                }
              }
            }
          }
        }
      ]
    }
  },
  "condition": {
    "script": {
      "source": "ctx.vars.fails_check = ctx.payload.check.hits.total != 0 && ctx.payload.check.hits.hits[0]._source.cluster_state.status != 'green';ctx.vars.not_resolved = ctx.payload.alert.hits.total == 1 && ctx.payload.alert.hits.hits[0]._source.resolved_timestamp == null;return ctx.vars.fails_check || ctx.vars.not_resolved",
      "lang": "painless"
    }
  },
  "transform": {
    "script": {
      "source": "ctx.vars.email_recipient = (ctx.payload.kibana_settings.hits.total > 0) ? ctx.payload.kibana_settings.hits.hits[0]._source.kibana_settings.xpack.default_admin_email : null;ctx.vars.is_new = ctx.vars.fails_check && !ctx.vars.not_resolved;ctx.vars.is_resolved = !ctx.vars.fails_check && ctx.vars.not_resolved;def state = ctx.payload.check.hits.hits[0]._source.cluster_state.status;if (ctx.vars.not_resolved){ctx.payload = ctx.payload.alert.hits.hits[0]._source;if (ctx.vars.fails_check == false) {ctx.payload.resolved_timestamp = ctx.execution_time;}} else {ctx.payload = ['timestamp': ctx.execution_time, 'metadata': ctx.metadata.xpack];}if (ctx.vars.fails_check) {ctx.payload.prefix = 'Elasticsearch cluster status is ' + state + '.';if (state == 'red') {ctx.payload.message = 'Allocate missing primary shards and replica shards.';ctx.payload.metadata.severity = 2100;} else {ctx.payload.message = 'Allocate missing replica shards.';ctx.payload.metadata.severity = 1100;}}ctx.vars.state = state.toUpperCase();ctx.payload.update_timestamp = ctx.execution_time;return ctx.payload;",
      "lang": "painless"
    }
  },
  "actions": {
    "add_to_alerts_index": {
      "index": {
        "index": ".monitoring-alerts-6",
        "doc_type": "doc",
        "doc_id": "i-BZfpJDT2SUTGAk1NDq_g_elasticsearch_cluster_status"
      }
    },
  }
```
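For reference, the condition script above boils down to two booleans. Here is a minimal Python sketch of that logic (illustrative only, assuming `_source` documents shaped like the `check` and `alert` chain inputs above):

```python
def evaluate_condition(check_hits, alert_hits):
    """Mirror of the watch's Painless condition, for illustration only.

    check_hits: list of _source docs from the 'check' input (.monitoring-es-*)
    alert_hits: list of _source docs from the 'alert' input (.monitoring-alerts-6)
    """
    # fails_check: the latest cluster_state.status exists and is not green
    fails_check = (
        len(check_hits) != 0
        and check_hits[0]["cluster_state"]["status"] != "green"
    )
    # not_resolved: exactly one stored alert doc with no resolved_timestamp yet
    not_resolved = (
        len(alert_hits) == 1
        and alert_hits[0].get("resolved_timestamp") is None
    )
    # The watch fires when the cluster is unhealthy OR an alert is still open
    return fails_check or not_resolved
```

So the condition should also fire once after the cluster goes green again, as long as the stored alert has no `resolved_timestamp` — which is what should let the transform mark the alert as resolved.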
I've removed some of the code above due to the post's message limit. However, the full watch is still retrievable with `GET .watches/_search`.
I'm receiving the following error:

```json
"transform": { "type": "script", "status": "failure", "reason": "runtime error", "error": { "root_cause": [ { "type": "script_exception", "reason": "runtime error", "script_stack": [ "ctx.vars.is_new = ctx.vars.fails_check && !ctx.vars.not_resolved;", " ^---- HERE"
```
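For context, the line the error points at is where the transform script classifies a firing as "new" versus "resolved". A sketch of just those flag assignments in Python (the variable names match the Painless script; everything else is illustrative):

```python
def classify(fails_check, not_resolved):
    """Mirror of the is_new / is_resolved flags set by the transform script."""
    # is_new: the cluster is unhealthy and there is no open alert yet
    is_new = fails_check and not not_resolved
    # is_resolved: the cluster is healthy again but an alert is still open
    is_resolved = not fails_check and not_resolved
    return is_new, is_resolved
```

My expectation is that a yellow cluster with no open alert yields `is_new`, and a green cluster with an open alert yields `is_resolved` — but the transform never gets that far because of the runtime error above.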
Thank you in advance.