I am trying to visualise the occurrences of all the possible "utm_source" values. I can't enable regex support for Painless on this cluster because it's a managed service, not an ELK stack we run directly. Is it possible to use Painless, like my example below, to match everything between "utm_source" and the following "&"?
I am trying to use this in the Advanced JSON input of a visualisation. This is a temporary query on historic data, so fixing the parsing upstream isn't really a good solution for me. The position of "utm_source" is not consistent.
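Here is the sort of thing I have so far in the JSON Input of the terms aggregation, just to show the mechanism (your_url_field.keyword is a stand-in for my real field name; this only echoes the selected field back, it doesn't extract the utm_source part):

```
{
  "script": {
    "lang": "painless",
    "source": "'' + _value"
  }
}
```

With the URL field selected in the aggregation, _value is the raw URL string for each document.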
If you are on Kibana 7.12 you can define a runtime field on the index pattern that does this extraction. Then you can use this field like any other field (knowing it's a runtime field, calculated when the query is executed).
If you have an older Kibana, you could do something similar with scripted fields.
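I can't test it on your data, but as a rough sketch, a keyword runtime field doing that extraction could look something like this (your_url_field.keyword is an assumption, swap in whatever keyword field holds the raw URL):

```
// Emit whatever sits between "utm_source=" and the next "&" (or the end of the string)
if (doc['your_url_field.keyword'].size() > 0) {
  String url = doc['your_url_field.keyword'].value;
  int start = url.indexOf('utm_source=');
  if (start >= 0) {
    start += 'utm_source='.length();
    int end = url.indexOf('&', start);
    emit(end >= 0 ? url.substring(start, end) : url.substring(start));
  }
}
```

Documents without a utm_source emit nothing, so they simply drop out of a terms aggregation on that field.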
Best,
Matthias
Hi @matw, thanks for your reply; that is a great suggestion.
Unfortunately the only option I have presently is using the Advanced JSON inside of the visualisation.
We are using a managed ELK service through Logz.io, and they are not yet running 7.12; even if they were, I'm not sure whether they would expose this capability.
Is there any chance your script can be persuaded to work inside of the visualisation?
Any field calculated when the query is submitted has worse performance. So scripted fields are disabled on Logz.io? Here is a discuss thread where someone succeeded using the JSON input (I didn't test it):
However, all of these approaches have the same performance drawback.
Best,
Matthias
Yeah, I realise the performance impact is there regardless. I guess their logic is that a scripted field reused across a handful of visualisations makes it easier to cause slowdowns, but yes, scripted fields are disabled on Logz.io.
I have had some success manipulating the visualisation based on this guide, but where I run out of talent is programming the logic that reads something like "from utm_source to the next &", assuming that's even possible. Roughly what I think I need is sketched below.
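To make it concrete, this is roughly what I imagine the JSON Input on the terms aggregation would need to contain, but it's a best guess rather than something I've got working (the field selected in the aggregation is the raw URL, so _value should be the URL string):

```
{
  "script": {
    "lang": "painless",
    "source": "def url = '' + _value; int start = url.indexOf('utm_source='); if (start < 0) { return 'no utm_source'; } start += 'utm_source='.length(); int end = url.indexOf('&', start); return end < 0 ? url.substring(start) : url.substring(start, end);"
  }
}
```

URLs without a utm_source would all land in a single 'no utm_source' bucket.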