Break a single line into one multiline event

I need to break down a single event into one multi-line event.

e.g.
single one-line event:

{"node47-domain":{"name":"node47-domain"},"node1208-domain":{"name":"node1208-domain"},"node170-domain":{"name":"node170-domain"},"node2534-domain":{"name":"node2534-domain"},"node2584-domain":{"name":"node2584-domain"},"node563-domain":{"name":"node563-domain"}}

single multi-line event:

"node47-domain"
"node1208-domain"
"node1259-domain"
"node2534-domain"
"node2584-domain"

This needs to stay one event because I'm overwriting a flat file that I reference from another index. I can easily break the single-line event into multiple single-line events, but that doesn't work when I need to overwrite the flat file.
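To illustrate the shape of what I'm after, here's a rough, untested sketch using a ruby filter (this assumes the whole JSON object arrives in the [message] field and that the event.get/event.set API is available; older Logstash versions use the event["field"] syntax, and "down_nodes_list" is just an illustrative field name):

    ruby {
      code => '
        require "json"
        # collect the keys of the JSON object (the node names) and join them
        # with newlines, so the whole list stays inside one event
        node_names = JSON.parse(event.get("message")).keys
        event.set("down_nodes_list", node_names.join("\n"))
      '
    }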

An in-depth breakdown of what I've attempted can be found in the first reply.

An in-depth breakdown of what I've tried and what I'm trying to achieve:

I'm looking for a better way to solve the following issue:

I have an alerting system and need to create a flat file listing nodes to exclude from alerting. The problem is that this file needs to be up to date whenever I reference it. My current pipeline appends individual entries to the flat file, but I think I need bulk entries so I can overwrite the entire file with the nodes currently in maintenance. Otherwise I don't have a good way of keeping this list up to date.

For example, I query an endpoint for down nodes that I should not alert on. The list of nodes to exclude from alerting comes in the form:
{"node47-domain":{"name":"node47-domain"},"node1208-domain":{"name":"node1208-domain"},"node170-domain":{"name":"node170-domain"},"node2534-domain":{"name":"node2534-domain"},"node2584-domain":{"name":"node2584-domain"},"node563-domain":{"name":"node563-domain"}}

My current workflow is as follows:

  1. use the 'split' filter to split on the terminator ",", thereby creating individual entries:
    split {
      terminator => ","
    }
    grok {
      patterns_dir => ["/opt/logstash/patterns/down_nodes"]
      match => { "message" => [ "%{down_nodes}" ] }
      tag_on_failure => [ "down_nodes_parsefailure" ]
    }
  2. create a new field which combines the node name with its state, e.g.:
    node47-domain,maintenance
    node1208-domain,maintenance
    mutate {
      add_field => { "node_status" => "maintenance" }
      add_field => { "icinga_down_nodes" => "%{icinga_down_node},%{node_status}" }
    }
  3. use the 'file' output plugin to write this data to a flat file:
    if [type] in ["icinga-down-nodes", "down-nodes"] {
      if [icinga_down_nodes] {
        file {
          path => "/etc/logstash/translate/icinga_down_nodes.csv"
          codec => line { format => "%{icinga_down_nodes}" }
        }
      }
    }
  4. use the 'translate' filter to reference the flat file from my 'alerts' index, allowing me to filter out nodes tagged 'maintenance' prior to alerting:
    if [node_name] {
      translate {
        field => "node_name"
        destination => "node_status"
        dictionary_path => "/etc/logstash/translate/icinga_down_nodes.csv"
        fallback => "up"
      }
    }

The issue with this workflow is that the flat file only ever has individual entries appended to it. I need to refresh the file with the current entries, but I don't have a good way to do that: a node may have been in maintenance and has since come out of maintenance, yet it still appears in the flat file.

What I'm looking for is a way to update the flat file in bulk with all entries. Is there a good way to use the 'split' filter in conjunction with, say, the 'multiline' filter to break that single line into multiple lines and then recombine them into one event that I can overwrite the flat file with?
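For example, could something along these lines work? This is only a sketch I haven't tested: it assumes a single event already carries all of the node,maintenance lines in one field (the hypothetical down_nodes_list from my first post, with ",maintenance" appended to each name), and that the installed version of the 'file' output plugin supports the write_behavior option (older versions may not):

    file {
      path => "/etc/logstash/translate/icinga_down_nodes.csv"
      # rewrite the whole file on every event instead of appending
      write_behavior => "overwrite"
      codec => line { format => "%{down_nodes_list}" }
    }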

Your main problem is that the file output is append-only. Perhaps you can use an exec output to make sure the file is overwritten each time.
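Roughly something like this (untested; it reuses the hypothetical down_nodes_list field from the sketches above, and shell quoting of a multi-line field is fragile, e.g. it breaks if a value ever contains a single quote):

    exec {
      # rewrite the dictionary file from scratch on every event
      command => "echo '%{down_nodes_list}' > /etc/logstash/translate/icinga_down_nodes.csv"
    }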

I believe my main problem is that I need this to be append-only, since I'm sending multiple valid events to this file.

e.g. five nodes may be down, so five entries need to be visible in that file. What it sounds like you're suggesting would only allow one entry in the file at a time, so four nodes would not be seen in their true state of 'maintenance'.
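To be concrete about what the flat file needs to contain at any given time, with the nodes from the example above it should hold every down node at once, e.g.:

    node47-domain,maintenance
    node1208-domain,maintenance
    node170-domain,maintenance
    node2534-domain,maintenance
    node2584-domain,maintenance
    node563-domain,maintenance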
