I believe I have read all the current filter docs, but I'd still like input from more knowledgeable folks: I want to parse currency counter information with Logstash.
The input is a text file, shipped in via Filebeat, structured like this:
2016-12-22 18:59|wallet|USD:100:8,USD:200:4,USD:500:0,USD:1000:1,USD:2000:6,USD:5000:1,USD:10000:3
2016-12-22 18:59|piggybank|USD:100:47,USD:200:1,USD:500:0,USD:1000:8,USD:2000:15,USD:5000:0,USD:10000:1
My log lines may contain any ISO currency code, and the number of currency:denomination:count repetitions varies per currency.
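In case it helps, here is a minimal sketch of how I would split the pipe-delimited line first, using dissect and date (the field names `ts`, `container`, and `counters` are just my own placeholders):

```
filter {
  # split "2016-12-22 18:59|wallet|USD:100:8,..." into three fields
  dissect {
    mapping => { "message" => "%{ts}|%{container}|%{counters}" }
  }
  # parse the timestamp (minute resolution, no seconds in the input)
  date {
    match => ["ts", "yyyy-MM-dd HH:mm"]
  }
}
```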
The output should enable me to generate stacked bar charts over time, either in Kibana or perhaps in a Jupyter Notebook using matplotlib.
I will import my own data and maybe my wife's, but I will have an identifier field to distinguish them.
After reading the docs I am tempted to use a ruby filter.
Is that the best fitting strategy?
(I am perfectly able to dabble with the code myself, but would rather get an educated hint on whether that is the right direction.)
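To make the question concrete, this is roughly the ruby filter I have in mind, building on the `counters` field from the sketch above (the flattened field naming like `USD_100` is just one option I came up with, not an established pattern):

```
filter {
  ruby {
    code => '
      # "counters" looks like "USD:100:8,USD:200:4,..."
      event.get("counters").to_s.split(",").each do |entry|
        currency, denomination, count = entry.split(":")
        # .to_i makes the value a number, so Elasticsearch maps it as numeric
        event.set("#{currency}_#{denomination}", count.to_i)
      end
    '
  }
}
```

The alternative I see would be emitting one event per denomination (e.g. via a split filter) so a stacked bar chart can aggregate on a single field instead of many, but I am not sure which shape works better in Kibana.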
By the way, when I figured out I could use .to_i, I was mightily impressed that the result showed up in "visualization" as a selectable field in the drop-down list, because it turned into a number in Elasticsearch.
(I deleted the index after each test, so I also got a fresh "manage index" on every test run.)