Unable to send email using watcher on X-Pack 6.2.1

Hi, we are seeing an error even though the essential SMTP configuration and the other email action settings are enabled on X-Pack 6.2.1. Below is the error message we are facing:
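Our SMTP settings follow the standard email account layout in elasticsearch.yml, roughly like this (host, user, and password are placeholders here, not our actual values):

```yaml
xpack.notification.email.account:
  standard_account:
    profile: standard
    smtp:
      auth: true
      starttls.enable: true
      host: smtp.example.org    # placeholder SMTP server
      port: 587
      user: alerts@example.org  # placeholder account
      password: changeme        # placeholder credential
```

Note that changes to elasticsearch.yml only take effect after a node restart.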

Can someone please assist here?

Regards
Mahesh

Hey,

I assume you did configure an email account? If so, can you possibly use the execute watch API with this particular watch and paste the response in here?

This will ease debugging a lot. Thanks!
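Run from Kibana Dev Tools, the execute watch API call would look like this (`<watch_id>` is a placeholder for your watch's ID):

```
POST _xpack/watcher/watch/<watch_id>/_execute
```

The JSON response contains the full watch record, including the input result and whether the condition was met.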

--Alex

Thanks Alexander. I can see the below response coming from the ELK host:

```
[root@hostname ~]# POST _xpack/watcher/watch/6dfb7d8a-e93f-4513-b508-76f3ee268861/_execute
Please enter content (application/x-www-form-urlencoded) to be POSTed:
This is from elk test email
^Z
[1]+  Stopped    POST _xpack/watcher/watch/6dfb7d8a-e93f-4513-b508-76f3ee268861/_execute
```

Hi Alex, did you get a chance to look into this output?

Sorry, forgot to answer. This output does not help; my examples were meant to be pasted into the Kibana Dev Tools, but you pasted them into a Linux console and thus got weird responses.

I am interested in the JSON responses of the above calls, when executed in Kibana Dev Tools.

Below is the response from Dev Tools:

```json
{
  "_id": "6dfb7d8a-e93f-4513-b508-76f3ee268861_28fd79ac-47da-4323-bfcc-a17a25936dc2-2018-05-07T11:40:32.796Z",
  "watch_record": {
    "watch_id": "6dfb7d8a-e93f-4513-b508-76f3ee268861",
    "node": "0vYknH82RQmxnCMeQx7_Dg",
    "state": "execution_not_needed",
    "status": {
      "state": {
        "active": true,
        "timestamp": "2018-05-07T11:35:20.971Z"
      },
      "last_checked": "2018-05-07T11:40:32.796Z",
      "actions": {
        "email_1": {
          "ack": {
            "timestamp": "2018-05-04T12:13:01.917Z",
            "state": "awaits_successful_execution"
          }
        }
      },
      "execution_state": "execution_not_needed",
      "version": 5976
    },
    "trigger_event": {
      "type": "manual",
      "triggered_time": "2018-05-07T11:40:32.796Z",
      "manual": {
        "schedule": {
          "scheduled_time": "2018-05-07T11:40:32.796Z"
        }
      }
    },
    "input": {
      "search": {
        "request": {
          "search_type": "query_then_fetch",
          "indices": [
            "metricbeat-*"
          ],
          "types": [],
          "body": {
            "size": 0,
            "query": {
              "bool": {
                "filter": {
                  "range": {
                    "@timestamp": {
                      "gte": "{{ctx.trigger.scheduled_time}}||-30m",
                      "lte": "{{ctx.trigger.scheduled_time}}",
                      "format": "strict_date_optional_time||epoch_millis"
                    }
                  }
                }
              }
            },
            "aggs": {
              "metricAgg": {
                "max": {
                  "field": "system.cpu.total.pct"
                }
              }
            }
          }
        }
      }
    },
    "condition": {
      "script": {
        "source": "if (ctx.payload.aggregations.metricAgg.value > params.threshold) { return true; } return false;",
        "lang": "painless",
        "params": {
          "threshold": 1000
        }
      }
    },
    "metadata": {
      "name": "TEST_threshold",
      "watcherui": {
        "trigger_interval_unit": "m",
        "agg_type": "max",
        "time_field": "@timestamp",
        "trigger_interval_size": 10,
        "term_size": 5,
        "time_window_unit": "m",
        "threshold_comparator": ">",
        "term_field": null,
        "index": [
          "metricbeat-*"
        ],
        "time_window_size": 30,
        "threshold": 1000,
        "agg_field": "system.cpu.total.pct"
      },
      "xpack": {
        "type": "threshold"
      }
    },
    "result": {
      "execution_time": "2018-05-07T11:40:32.796Z",
      "execution_duration": 31,
      "input": {
        "type": "search",
        "status": "success",
        "payload": {
          "_shards": {
            "total": 310,
            "failed": 0,
            "successful": 310,
            "skipped": 305
          },
          "hits": {
            "hits": [],
            "total": 240279,
            "max_score": 0
          },
          "took": 30,
          "timed_out": false,
          "aggregations": {
            "metricAgg": {
              "value": 6.705599784851074
            }
          }
        },
        "search": {
          "request": {
            "search_type": "query_then_fetch",
            "indices": [
              "metricbeat-*"
            ],
            "types": [],
            "body": {
              "size": 0,
              "query": {
                "bool": {
                  "filter": {
                    "range": {
                      "@timestamp": {
                        "gte": "2018-05-07T11:40:32.796Z||-30m",
                        "lte": "2018-05-07T11:40:32.796Z",
                        "format": "strict_date_optional_time||epoch_millis"
                      }
                    }
                  }
                }
              }
            },
            "aggs": {
              "metricAgg": {
                "max": {
                  "field": "system.cpu.total.pct"
                }
              }
            }
          }
        }
      },
      "condition": {
        "type": "script",
        "status": "success",
        "met": false
      },
      "actions": []
    },
    "messages": []
  }
}
```

Please take the time to format your messages properly into code blocks. Otherwise they are pretty much unreadable (humans are pretty mediocre JSON parsers...).

However, after looking at it, it is expected that no action is triggered. You can see that result.condition.met is set to false, which means your condition did not evaluate to true.

If you further check the JSON you see that your search response contains this value for the aggregation:

"value": 6.705599784851074

Your condition, however, checks if that value is greater than 1000, which is false, and thus no actions are fired.
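To make the watch fire on data like the above, the threshold in the condition script's params has to drop below the aggregated value; a sketch of the relevant condition fragment (5 is just an illustrative threshold, pick what fits your alerting needs):

```json
"condition": {
  "script": {
    "source": "if (ctx.payload.aggregations.metricAgg.value > params.threshold) { return true; } return false;",
    "lang": "painless",
    "params": {
      "threshold": 5
    }
  }
}
```

With the aggregation returning roughly 6.7, a threshold of 5 makes the condition evaluate to true and the email action runs.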

Many thanks, Alex. I have changed the condition and am able to see the below output in the result set:

```json
{
  "_id": "6dfb7d8a-e93f-4513-b508-76f3ee268861_151747ee-806e-45e7-bfc2-6d4b946c072f-2018-05-07T17:41:03.505Z",
  "watch_record": {
    "watch_id": "6dfb7d8a-e93f-4513-b508-76f3ee268861",
    "node": "0vYknH82RQmxnCMeQx7_Dg",
    "state": "not_executed_already_queued",
    "trigger_event": {
      "type": "manual",
      "triggered_time": "2018-05-07T17:41:03.505Z",
      "manual": {
        "schedule": {
          "scheduled_time": "2018-05-07T17:41:03.505Z"
        }
      }
    },
    "messages": [
      "Watch is already queued in thread pool"
    ]
  }
}
```

But I still see the "internal server error" on the Test email action :frowning:

The error message above is different. It means that this watch is already waiting for execution in the thread pool queue, and thus this execution was cancelled early.

Either wait and retry, or use the execute watch API and provide the whole watch inline.
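Executing a watch inline means supplying the full watch definition in the request body instead of referencing a stored watch, so it does not collide with the queued one. A minimal sketch for testing the email action this way (recipient and body are placeholders, and the condition/force settings are optional flags of the execute watch API):

```json
POST _xpack/watcher/watch/_execute
{
  "ignore_condition": true,
  "action_modes": {
    "_all": "force_execute"
  },
  "watch": {
    "trigger": { "schedule": { "interval": "10m" } },
    "input": { "simple": { "foo": "bar" } },
    "condition": { "always": {} },
    "actions": {
      "email_1": {
        "email": {
          "to": "you@example.org",
          "subject": "Watcher test",
          "body": "Test email from Watcher"
        }
      }
    }
  }
}
```

If this still returns an internal server error, the response (and the Elasticsearch log) should contain the underlying SMTP exception, which is what you need for debugging the email account configuration.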

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.