It's possible, but not in an efficient way. We could parse this field out with a Painless script and then search over it as a number. Scripted fields can be added to an index pattern under Management in Kibana. Would that work for you?
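As a rough sketch of what such a scripted field could look like (assuming the raw value lives in a keyword field named `method_time.keyword` with values like `-234ms` — both the field name and the format are assumptions here, since they aren't shown above):

```painless
// Hypothetical scripted field: strips the "ms" suffix and returns the
// value as an integer so Kibana can treat it numerically.
def raw = doc['method_time.keyword'].value;
if (raw != null) {
  return Integer.parseInt(raw.replace("ms", ""));
}
return null;
```

Keep in mind this runs per document at query time, which is where the inefficiency mentioned above comes from.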
A more efficient way would be to pre-process the data with an ingest node pipeline so we can move the value into its own field.
A potentially more efficient way would be to update your ingest pipeline to also put just the value in a field like `getMethodtime.ms`. Make sure you set the field type to the proper numeric data type (probably integer), and then you would have a key-value pair of `getMethodtime.ms: -234`. With that you could do a range query as described here: https://www.elastic.co/guide/en/beats/packetbeat/current/kibana-queries-filters.html
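A minimal sketch of such a pipeline, assuming the raw text sits in a `message` field formatted like `getMethodtime: -234ms` (the pipeline name, source field, and pattern are all assumptions here):

```json
PUT _ingest/pipeline/extract-method-time
{
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": ["getMethodtime: %{NUMBER:getMethodtime.ms:int}ms"]
      }
    }
  ]
}
```

The `:int` suffix tells the grok processor to store the captured value as an integer, and a range query against the resulting field would then look like:

```json
GET my-index/_search
{
  "query": {
    "range": { "getMethodtime.ms": { "gte": -300, "lte": 0 } }
  }
}
```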
painless scripts: yep, it'll be evaluated at query time, so it incurs a fixed per-document overhead multiplied by the number of results. In practice I'm not sure of the magnitude; it may not be that much.
grok: you got it, a grok filter to pull that number out of the field. The data type would be set independently, depending on how you manage mappings in Elasticsearch.
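For reference, a Logstash grok filter for this might look like the following (again assuming the raw text is in `message` and formatted like `getMethodtime: -234ms`; both are assumptions):

```
filter {
  grok {
    # Capture the number and convert it to an integer in one step
    # via the ":int" suffix supported by the grok filter.
    match => { "message" => "getMethodtime: %{NUMBER:[getMethodtime][ms]:int}ms" }
  }
}
```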
Your Logstash output to Elasticsearch can be assigned an index template, or you can set the mapping directly on your index, for example.
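For example, a legacy index template that maps the extracted field as an integer could look like this (template name and index pattern are placeholders):

```json
PUT _template/method-time
{
  "index_patterns": ["logstash-*"],
  "mappings": {
    "properties": {
      "getMethodtime": {
        "properties": {
          "ms": { "type": "integer" }
        }
      }
    }
  }
}
```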