How can I get separate emails for distinct services in logs with a single watcher?

Hi,
I am new to ELK and I have a single log file where all service failures get logged.
Can I create a single watcher that generates a separate alert for each service failure?
For example: if the log contains failures for serviceA, serviceB, and serviceC, the watcher should generate 3 emails, each with the corresponding failure details.

Kindly suggest

Currently, each alert can only send out a single email. If you need to send out separate emails, the easiest path might be to create an alert per service.

Hope this helps!

--Alex

Thanks a lot for the reply.
In our current architecture, we send emails with the error details to ServiceNow for incident creation whenever a service fails.
We have around 400 services, so I would have to create 400 watcher alerts to achieve this...
Is there any other way or workaround to achieve the same?

Thanks
Abhishek

You could work around that by sending the alert via HTTP to Logstash (using the Logstash http input) and then using Logstash to send out several emails: split the incoming event with the split filter and send the emails with the email output.
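
A minimal sketch of such a pipeline (the port, field names, and mail address here are assumptions about your payload, not taken from your setup):

input {
  http {
    port => 8080
  }
}

filter {
  # assuming the watcher posts a JSON body containing a "services" array,
  # with one entry per failed service
  split {
    field => "services"
  }
}

output {
  email {
    to => "servicenow-intake@example.com"        # hypothetical ServiceNow intake address
    subject => "Failure in %{[services][name]}"  # hypothetical field on each split event
    body => "%{[services][details]}"             # hypothetical field on each split event
  }
}

After the split filter, each element of the array becomes its own event, so the email output fires once per failed service.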

Hope this helps!

--Alex

Thanks,
I tried the workaround solution you suggested.

I updated the watcher to send the payload to Logstash via a webhook, but I am getting a "path not found" error. Please find below the simulation result for the execute action. Kindly suggest. The Logstash http input plugin is installed.

{
"watch_id": "inlined",
"node": "zJbMhvCbS3aqrVrBBFHWJg",
"state": "executed",
"user": "elastic",
"status": {
"state": {
"active": true,
"timestamp": "2019-03-21T15:40:55.422Z"
},
"last_checked": "2019-03-21T15:40:55.423Z",
"last_met_condition": "2019-03-21T15:40:55.423Z",
"actions": {
"my_webhook": {
"ack": {
"timestamp": "2019-03-21T15:40:55.422Z",
"state": "awaits_successful_execution"
},
"last_execution": {
"timestamp": "2019-03-21T15:40:55.423Z",
"successful": false,
"reason": "received [404] status code"
}
}
},
"execution_state": "executed",
"version": -1
},
"trigger_event": {
"type": "manual",
"triggered_time": "2019-03-21T15:40:55.423Z",
"manual": {
"schedule": {
"scheduled_time": "2019-03-21T15:40:55.423Z"
}
}
},
"input": {
"search": {
"request": {
"search_type": "query_then_fetch",
"indices": [
"pq-icelog*"
],
"types": ,
"body": {
"size": 20,
"query": {
"bool": {
"must": [
{
"match": {
"eventDetails.eventType": "ERROR"
}
},
{
"range": {
"@timestamp": {
"gte": "now-2d",
"lt": "now"
}
}
}
]
}
},
"aggs": {
"group_by_serviceName": {
"terms": {
"field": "interfaceHeader.className.keyword",
"size": 5
},
"aggs": {
"group_by_logLevel": {
"terms": {
"field": "eventDetails.eventType.keyword",
"size": 5
},
"aggs": {
"get_latest": {
"terms": {
"field": "@timestamp",
"size": 1,
"order": {
"_key": "desc"
}
}
}
}
}
}
}
}
}
}
}
},
"condition": {
"compare": {
"ctx.payload.hits.total": {
"gte": 0
}
}
},
"metadata": {
"name": "Logstash_Webhook_Alert",
"xpack": {
"type": "json"
}
},
"result": {
"execution_time": "2019-03-21T15:40:55.423Z",
"execution_duration": 26,
"input": {
"type": "search",
"status": "success",
"payload": {
"_shards": {
"total": 195,
"failed": 0,
"successful": 195,
"skipped": 180
},
"hits": {
"hits": ,
"total": 0,
"max_score": null
},
"took": 19,
"timed_out": false,
"aggregations": {
"group_by_serviceName": {
"doc_count_error_upper_bound": 0,
"sum_other_doc_count": 0,
"buckets":
}
}
},
"search": {
"request": {
"search_type": "query_then_fetch",
"indices": [
"pq-icelog*"
],
"types": ,
"body": {
"size": 20,
"query": {
"bool": {
"must": [
{
"match": {
"eventDetails.eventType": "ERROR"
}
},
{
"range": {
"@timestamp": {
"gte": "now-2d",
"lt": "now"
}
}
}
]
}
},
"aggs": {
"group_by_serviceName": {
"terms": {
"field": "interfaceHeader.className.keyword",
"size": 5
},
"aggs": {
"group_by_logLevel": {
"terms": {
"field": "eventDetails.eventType.keyword",
"size": 5
},
"aggs": {
"get_latest": {
"terms": {
"field": "@timestamp",
"size": 1,
"order": {
"_key": "desc"
}
}
}
}
}
}
}
}
}
}
}
},
"condition": {
"type": "compare",
"status": "success",
"met": true,
"compare": {
"resolved_values": {
"ctx.payload.hits.total": 0
}
}
},
"actions": [
{
"id": "my_webhook",
"type": "webhook",
"status": "failure",
"transform": {
"type": "script",
"status": "success",
"payload": {
"hits": ,
"total": 0,
"max_score": null
}
},
"reason": "received [404] status code",
"webhook": {
"request": {
"host": "10.132.1.2",
"port": 9615,
"scheme": "http",
"method": "post",
"path": "testAlert.json",
"headers": {
"Content-type": "application/json"
},
"body": "{hits=, total=0, max_score=null}"
},
"response": {
"status": 404,
"headers": {
"content-length": [
"71"
],
"content-type": [
"application/json"
],
"x-content-type-options": [
"nosniff"
],
"x-cascade": [
"pass"
]
},
"body": "{"path":"/testAlert.json","status":404,"error":{"message":"Not Found"}}"
}
}
}
]
},
"messages":
}

In addition to the above, when I executed the command below I received an error:
bin/logstash -e "input { http { } } output { stdout { codec => rubydebug} }"

Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[2019-03-22T10:50:00,562][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
ERROR: Configuration reloading can't be used with 'config.string' (-e).
usage:
bin/logstash -f CONFIG_PATH [-t] [-r] [-w COUNT] [-l LOG]
bin/logstash --modules MODULE_NAME [-M "MODULE_NAME.var.PLUGIN_TYPE.PLUGIN_NAME.VARIABLE_NAME=VALUE"] [-t] [-w COUNT] [-l LOG]
bin/logstash -e CONFIG_STR [-t] [--log.level fatal|error|warn|info|debug|trace] [-w COUNT] [-l LOG]
bin/logstash -i SHELL [--log.level fatal|error|warn|info|debug|trace]
bin/logstash -V [--log.level fatal|error|warn|info|debug|trace]
bin/logstash --help
[2019-03-22T10:50:00,586][ERROR][org.logstash.Logstash ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit
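
One way around that error (assuming config.reload.automatic is enabled in logstash.yml, which cannot be combined with a -e config string) is to put the same snippet into a file and start Logstash with -f instead, for example:

# /tmp/http-test.conf (hypothetical path)
input { http { port => 8080 } }
output { stdout { codec => rubydebug } }

bin/logstash -f /tmp/http-test.conf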

The default port is 8080, according to the Logstash http input documentation at https://www.elastic.co/guide/en/logstash/6.6/plugins-inputs-http.html.
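
For reference, the failing webhook in the simulation above posts to port 9615 with path "testAlert.json", which looks like the Logstash monitoring API rather than the http input. A sketch of a webhook action pointed at the http input instead (host, port, and path below are assumptions; use wherever the input actually listens):

"actions": {
  "my_webhook": {
    "webhook": {
      "scheme": "http",
      "host": "10.132.1.2",
      "port": 8080,
      "method": "post",
      "path": "/",
      "headers": {
        "Content-Type": "application/json"
      },
      "body": "{{#toJson}}ctx.payload{{/toJson}}"
    }
  }
}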
