How to manipulate response data in Kibana?

Hi there,

I am switching from Splunk to ELK and want to generate a summary error report showing error counts broken down by error message. The error messages can contain random numbers, which makes the report lengthy. To address this, I want to strip any digits from the message so that I end up with unique error messages.

In Splunk, I am able to achieve this via "eval" command:

host=hostA "" | rex "(?P<message>[\w\d\s.*:()#=-|%<>/,{']+)" | eval message=replace(message,"\d+","") | eval message=substr(message, 1, 120)
| stats count by message

I don't know how I can achieve the same thing in Kibana.

Thanks
/Sam

The equivalent in Kibana would be scripted fields.

You can use the Painless language to write a script that is executed on each document, with the return value of the script treated as the value of a new "virtual" field. It behaves like other fields, so you can run aggregations on it, search by it, and so on.
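A minimal sketch of such a scripted field, assuming the raw error text is indexed as a keyword field called message.keyword (a guess about your mapping), and that regex support in Painless is enabled via script.painless.regex.enabled: true in elasticsearch.yml:

```painless
// Scripted field (Painless): assumes the error text lives in "message.keyword"
if (doc['message.keyword'].size() == 0) {
  return "";
}
String m = doc['message.keyword'].value;
// Strip digit runs and truncate to 120 chars, mirroring the Splunk eval chain
m = /\d+/.matcher(m).replaceAll('');
return m.length() > 120 ? m.substring(0, 120) : m;
```

You would add this under Management > Index Patterns > Scripted fields, and could then use the new field in a terms aggregation to get the per-message counts.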

Please note that scripted fields are executed at query time, which means there is no prepared search index for them. This can make them slow when a lot of documents are matched.

I always recommend starting with a scripted field to see what kind of processing you need on your documents. If the performance is good enough for your use case, you are done. If requests get too slow, you can move the processing logic into the ingest phase, so it is executed once when a new document is ingested into Elasticsearch. The resulting values are properly indexed, and queries stay fast even for very large datasets. This can be done with Logstash or with an ingest pipeline.
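For the ingest-pipeline route, a hedged sketch using the gsub processor (the pipeline name strip-digits and the field name message are assumptions about your setup):

```
PUT _ingest/pipeline/strip-digits
{
  "description": "Strip digit runs from the error message at ingest time",
  "processors": [
    {
      "gsub": {
        "field": "message",
        "pattern": "\\d+",
        "replacement": ""
      }
    }
  ]
}
```

Documents indexed with ?pipeline=strip-digits (or via an index default pipeline) would then have the digits removed before indexing, so no query-time scripting is needed.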


Thanks very much Joe, I will look into scripted field support then.

Cheers
/Sam