Hi Everyone,
So the logs I'm dealing with have a "time to process" field, which is being extracted by means of a grok filter. The problem is that the field can take one of many forms:
472.767µs
1.128953ms
1.480317956s
No problems with the grok, it's a fairly simple regex to match them. What I want to do is graph that field in Kibana, but obviously Elasticsearch will see it as a text field, and as such I can't AVG, SUM, etc.
(I'm trying to get a pie/bar chart of processing times.)
So my question is: is there some kind of transformation I can apply to those values within Logstash (or anywhere else in the ELK pipeline) that will boil the different values down to milliseconds, so that I can then average the values in Kibana?
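Ideally the three samples above would end up as 0.472767, 1.128953 and 1480.317956 respectively, all in milliseconds.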
Hi Mark & thanks for the response.
Yeah, normalizing in the application is not an option unfortunately, hence my question. Those are obviously not complete log entries, just samples of the field in question taken directly from the logs. I think I've found a possible solution using the ruby filter for Logstash.
What I know about Ruby can be written on the back of a postage stamp, but from what I can tell, it should be possible to check for "(µ|m)?s" and multiply or divide based on what the match is, and in effect get Logstash to normalize them.
If you happen to know Ruby and can offer any code snippets, I'd be most appreciative.
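For what it's worth, this is the rough shape I have in mind. It's a completely untested sketch: I'm assuming the grok pattern captures the raw value into a field I've called process_time (use whatever your pattern actually names it), writing the result to a made-up field process_time_ms, and using the newer event.get/event.set API of the ruby filter.

```
filter {
  ruby {
    code => '
      raw = event.get("process_time")   # raw string, e.g. "472.767µs"
      if raw
        if raw.end_with?("µs")
          # microseconds -> milliseconds
          event.set("process_time_ms", raw.chomp("µs").to_f / 1000.0)
        elsif raw.end_with?("ms")
          # already in milliseconds
          event.set("process_time_ms", raw.chomp("ms").to_f)
        elsif raw.end_with?("s")
          # seconds -> milliseconds
          event.set("process_time_ms", raw.chomp("s").to_f * 1000.0)
        end
      end
    '
  }
}
```

The idea being that process_time_ms always comes out as a float in milliseconds, so Elasticsearch should map it as a numeric field that can then be averaged in Kibana (assuming the index mapping picks it up as a number rather than text).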
Warm regards
Dylan