Kibana: searching for greater-than or less-than values in a string-type field


I have a document that contains information related to Methodlogger.

My log line looks like this:


Timestamp, Log_type, and Message are separate fields in a single document.

The Message field contains information such as:

getMethodtime:-234 ms

Is it possible to search for times greater than 1000 ms, given that the Message field's type is string?

Regards :tiger:

It's possible, but not efficiently. We could parse this field out with a Painless script and then search over it as a number. Scripted fields can be added from the index pattern management page. Would that work for you?
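For reference, a scripted field of this kind might look roughly like the following Painless sketch. The source field name `message.keyword`, the output name, and the exact log format are assumptions based on the sample line, and regex support has to be enabled in Elasticsearch (`script.painless.regex.enabled`):

```painless
// Sketch: extract the millisecond value from a message like
// "getMethodtime:-234 ms". Field names here are assumptions.
def m = /getMethodtime:-(\d+) ms/.matcher(doc['message.keyword'].value);
if (m.find()) {
  return Integer.parseInt(m.group(1));
}
return -1; // sentinel for lines that don't match
```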

A more efficient way would be to pre-process with an ingest node, so the value can be moved to a separate field.
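As a rough sketch, an ingest pipeline for this could combine a `grok` processor with a `convert` processor. The pipeline name, target field name, and pattern below are assumptions derived from the sample log line, not the poster's actual setup:

```json
PUT _ingest/pipeline/method-time
{
  "description": "Extract getMethodtime value into a numeric field (sketch)",
  "processors": [
    {
      "grok": {
        "field": "Message",
        "patterns": ["getMethodtime:-%{NUMBER:method_time_ms} ms"]
      }
    },
    {
      "convert": {
        "field": "method_time_ms",
        "type": "integer"
      }
    }
  ]
}
```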

Hey @jbudz Thanks for your reply.

That's probably the option I was looking for.

I'm using Painless scripting to create new fields. If I modify my fields or create new ones, will that affect my search time?

A potentially more efficient way would be to update your ingest pipeline to also put just the value in its own field. Make sure you set the field type to the proper numeric datatype (integer, probably); then you would have a key/value pair of -234. With that you could do a range query as described here:
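Assuming the extracted value ends up in a numeric field (here hypothetically named `method_time_ms`, in an index hypothetically named `my-index`), the range query for "greater than 1000 ms" would look something like:

```json
GET my-index/_search
{
  "query": {
    "range": {
      "method_time_ms": {
        "gt": 1000
      }
    }
  }
}
```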

Hey @stiltz, I got your point; @jbudz suggested the same thing, making changes in my ingest node. My concern starts here.

The data flow with my Elasticsearch looks like this.

Filebeat sends data to Logstash, Logstash sends it to Elasticsearch, and my visualisations are in Kibana.

I'm planning to make changes with the ingest node; will that affect my existing grok filter?

I'm still confused about making changes to the grok filter.

@stiltz, you mentioned that I'll need to take care of the datatype:

Make sure you set the field type to the proper numeric datatype (integer probably) and then you would have a key value pair of -234

Can you help me with this? How do I set the datatype?

Thank you guys for your response.
Regards :tiger:

Painless scripts: yep, they're evaluated at runtime, so there's a fixed overhead multiplied by the number of results. In practice I'm not sure of the magnitude; it may not be that much.

Grok: you got it, a grok filter to pull that number out of the field. The data type would be set independently, depending on how you manage types in Elasticsearch.
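A minimal sketch of such a Logstash grok filter, assuming the raw text is in a field called `Message` and using a hypothetical target field `method_time_ms` (the `:int` suffix asks grok to coerce the captured text to an integer):

```
filter {
  grok {
    # Extracts the number from a line like "getMethodtime:-234 ms".
    # Field names here are assumptions, not the poster's actual config.
    match => { "Message" => "getMethodtime:-%{NUMBER:method_time_ms:int} ms" }
  }
}
```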

Your Logstash output to Elasticsearch can be assigned an index template, or you can set the type directly on your index, for example.
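For example, a legacy-style index template that pins the hypothetical `method_time_ms` field to an integer might look like this; the template name, index pattern, and whether a mapping type is required all depend on your Elasticsearch version:

```json
PUT _template/method-logs
{
  "index_patterns": ["methodlog-*"],
  "mappings": {
    "properties": {
      "method_time_ms": { "type": "integer" }
    }
  }
}
```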


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.