Hi,
I have a multiline log format consisting of an XML document with a very deeply nested message structure. Every element in the structure can have a child element called Message with a value attribute, like so:
<ElementA>
  <ElementB>
    <ElementC>
      <Message value="foo"/>
    </ElementC>
  </ElementB>
  <ElementB>
    <ElementC/>
  </ElementB>
  <ElementB>
    <ElementC/>
    <Message value="bar"/>
  </ElementB>
  <Message value="even more bar"/>
</ElementA>
This means the number of elements varies from message to message and can get quite large. What I would like to do is build a Kibana table showing all message values together with their respective numbers of occurrences.
To do so, I thought I could extract an array of all value attributes from my multiline message with a pattern like:
grok {
  break_on_match => false
  match => ["message", "<Message value=\"%{DATA:msgText}\""]
}
However, grok only finds the first match for my pattern. If it is not possible to get all matches, is it at least possible to get the first n matches?
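To illustrate what I am after, here is the extraction sketched in plain Ruby (not working Logstash config, just showing the result I would expect, e.g. from a ruby filter; the regex is my own guess):

```ruby
# A sample multiline XML message with several Message elements.
message = <<~XML
  <ElementA>
    <ElementB><ElementC><Message value="foo"/></ElementC></ElementB>
    <Message value="even more bar"/>
  </ElementA>
XML

# String#scan returns ALL matches, unlike a single grok match.
values = message.scan(/<Message value="([^"]*)"\/>/).flatten
# values == ["foo", "even more bar"]
```

So essentially I am looking for grok's equivalent of scan, or a supported way to run something like this inside the pipeline.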
If this is not possible with Logstash filtering, can something like that be done on the Elasticsearch/Kibana side instead?
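For reference, what I ultimately want in Kibana is essentially a terms aggregation over the extracted values, something along these lines (msgText is just the field name from my grok attempt above, assuming it ends up as an array field):

```json
{
  "aggs": {
    "message_counts": {
      "terms": { "field": "msgText", "size": 100 }
    }
  }
}
```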
Any help appreciated.
Bye,
Markus