An in-depth breakdown of what I've tried and what I'm trying to achieve:
I'm looking for a better way to solve the following issue:
I have an alerting system and need to create a flat file which I can reference for nodes to exclude alerting on. The problem is that this file needs to be up to date whenever I reference it. My current pipeline creates individual entries in a flat file, but I think I need bulk entries so I can overwrite this entire flat file with current nodes in maintenance. Otherwise, I don't have a good cadence for keeping this list up to date.
For example, I query an endpoint for down nodes which I should not alert on. This list of down nodes which I need to exclude from alerting comes in the form:
{"node47-domain":{"name":"node47-domain"},"node1208-domain":{"name":"node1208-domain"},"node170-domain":{"name":"node170-domain"},"node2534-domain":{"name":"node2534-domain"},"node2584-domain":{"name":"node2584-domain"},"node563-domain":{"name":"node563-domain"}}
My current workflow is as follows:
- use the 'split' filter to split on the terminator "," thereby creating individual entries:
split {
  terminator => ","
}
grok {
  patterns_dir => ["/opt/logstash/patterns/down_nodes"]
  match => { "message" => ["%{down_nodes}"] }
  tag_on_failure => "down_nodes_parsefailure"
}
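For context, the contents of the pattern file aren't shown here; it captures the node name into the icinga_down_node field used by the mutate below. A hypothetical /opt/logstash/patterns/down_nodes might look like (my real pattern may differ):

```
down_nodes "(?<icinga_down_node>[^"]+)"
```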
- create a new field which combines the node name with its state, e.g.:
node47-domain,maintenance
node1208-domain,maintenance
mutate {
  add_field => { "node_status" => "maintenance" }
  add_field => { "icinga_down_nodes" => "%{icinga_down_node},%{node_status}" }
}
- use the 'file' output plugin to write this data to a flat file:
if [type] in ["icinga-down-nodes", "down-nodes"] {
if [icinga_down_nodes] {
file {
path => "/etc/logstash/translate/icinga_down_nodes.csv"
codec => line { format => "%{icinga_down_nodes}" }
}
}
}
- use the 'translate' filter to reference the flat file from my 'alerts' index, allowing me to filter out nodes tagged 'maintenance' prior to alerting
if [node_name] {
  translate {
    field => "node_name"
    destination => "node_status"
    dictionary_path => "/etc/logstash/translate/icinga_down_nodes.csv"
    fallback => "up"
  }
}
The issue with this workflow is that the flat file only ever grows: individual entries keep getting appended to it. I don't have a good way to refresh it with current entries, since new ones are constantly added -- a node may have been in maintenance but is now out of maintenance, yet it remains in the flat file.
What I'm looking for is a way to update the flat file in bulk with all current entries. Is there a good way to use the 'split' filter in conjunction with, say, the 'multiline' filter to break that single line into multiple lines and then recombine them into one event which I can use to overwrite the flat file?
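Conceptually, the behavior I want is something like the following, sketched in Python purely to illustrate the goal (not my actual pipeline): rebuild the whole exclusion file from the current payload on every run, so stale entries disappear.

```python
import json
import os
import tempfile

def rebuild_exclusion_file(payload: str, path: str) -> None:
    """Rewrite the whole exclusion file from the current down-node payload."""
    nodes = json.loads(payload)
    lines = [f"{name},maintenance" for name in nodes]
    # Write to a temp file and rename, so readers never see a partial file.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        f.write("\n".join(lines) + "\n")
    os.replace(tmp, path)
```

Each run replaces the file wholesale with exactly the nodes currently in maintenance, which is the cadence problem I can't solve with the append-only 'file' output.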