Suppose I am receiving log messages through Logstash in a custom format of the form FIELD1=VALUE1 FIELD2=VALUE2, etc. I'm still quite a bit in the dark here, but I've been playing a little with visualizations. What I would like is a unique count of the values of, say, FIELD1. I think what I'm trying to do is called an aggregation.
My question is: do I need to parse these fields at the Logstash stage (with grok or similar) and put each value into its own field? Or can the fields be parsed at search time, Splunk-style? Or is there another recommended approach? I'm looking for "the ELK way" here.
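For reference, this is roughly what I had in mind for the Logstash stage, a minimal sketch using the kv filter (the separators are assumptions based on my log format):

```
filter {
  kv {
    # assuming key=value pairs separated by spaces,
    # with "=" between each key and its value
    field_split => " "
    value_split => "="
  }
}
```

With something like this, FIELD1 would become a proper field on each event, which I assume is what Kibana's aggregations need. But I don't know if that's the idiomatic approach or whether search-time parsing is possible.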