So the logs I'm dealing with have a "time to process" field, which is being extracted by means of a grok filter. The problem is that the field can take one of many forms:
No problems with the grok itself; it's a fairly simple regex to match them. What I want to do is graph that field in Kibana, but Elasticsearch will obviously see it as a text field, and as such I can't AVG, SUM, etc.
(I'm trying to get a pie/bar chart of processing times.)
So my question is: is there some kind of transformation I can apply to those values within Logstash (or anywhere else in the ELK pipeline) that will boil the different values down to milliseconds, so that I can then average them in Kibana?
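(For context, a hypothetical version of that grok is below. The message layout and the field name time_to_process are assumptions on my part; the real pattern just needs to capture the number together with its unit.)

```
filter {
  grok {
    # Hypothetical pattern: captures the number and its unit (µs, ms or s)
    # together into a single field called time_to_process.
    match => { "message" => "time to process: (?<time_to_process>%{NUMBER} ?(?:µs|ms|s))" }
  }
}
```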
Thanks in advance.
Ideally you want to normalise these so they are all using (e.g.) seconds as the base unit, rather than the mix you have.
Are the examples above the exact values in the data?
Hi Mark & thanks for the response.
Yeah, normalizing in the application is not an option unfortunately, hence my question. And yes, those are obviously not complete log entries, just samples of the field in question taken directly from the logs. I think I've found a possible solution using the ruby filter for Logstash.
What I know about Ruby can be written on the back of a postage stamp, but from what I can tell, it should be possible to check for "(µ|m)?s" and multiply or divide based on the match, and in effect get Logstash to normalize the values.
If you happen to know Ruby and can offer any code snippets, I'd be most appreciative.
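As a rough sketch of the idea, something like the following ruby filter might work. The source field name time_to_process and the target field time_to_process_ms are assumptions, so adjust them to whatever your grok actually produces:

```
filter {
  ruby {
    code => '
      raw = event.get("time_to_process")
      if raw
        m = raw.to_s.match(/(?<value>[0-9.]+) ?(?<unit>µs|ms|s)/)
        if m
          ms = m[:value].to_f
          case m[:unit]
          when "µs" then ms /= 1000.0  # microseconds -> milliseconds
          when "s"  then ms *= 1000.0  # seconds -> milliseconds
          # "ms" needs no conversion
          end
          event.set("time_to_process_ms", ms)
        end
      end
    '
  }
}
```

Since time_to_process_ms is set as a Ruby float, Elasticsearch's dynamic mapping should pick it up as a numeric field, which is what lets Kibana average and sum it.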