On our site, users can perform searches. We log the start time and end time of each search, and are now curious to find all the searches that took longer than X time.
I have a Metadata object containing a FromDateTime and a ToDateTime. Both are formatted as DateTime objects.
I was looking for a way to do something like this:
(metadata.searchTime.toDateTime - metadata.searchTime.fromDateTime) > 1000ms
Obviously oversimplified, but I hope it gets the point across.
Hi @AlanMark, this sounds like a good job for scripted fields. You can configure them as part of the index pattern and basically add a "virtual" field that contains the time span in milliseconds. Then you can filter by that field like you would do it with a regular field.
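As a minimal sketch, a Kibana scripted field (Painless, type "number") for the duration could look like this. The field names `searchTime.fromDateTime` and `searchTime.toDateTime` are taken from the question and assumed to be mapped as `date` and present in every document:

```painless
// Kibana scripted field, type "number": search duration in milliseconds.
// Assumes both fields are mapped as `date` and exist in every document.
doc['searchTime.toDateTime'].value.toInstant().toEpochMilli()
  - doc['searchTime.fromDateTime'].value.toInstant().toEpochMilli()
```

Once the field exists (say, named `searchDurationMs`), you can filter on it like any regular numeric field, e.g. `searchDurationMs > 1000` in the search bar.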
If you have lots of data it might make sense to consider doing this calculation during ingest to avoid calculating the duration for each document for each query.
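If you go the ingest route, a `script` processor can compute the duration once per document at index time. A minimal sketch, assuming the timestamps arrive as ISO-8601 strings and using a hypothetical pipeline name and target field:

```json
PUT _ingest/pipeline/search-duration
{
  "description": "Store the search duration in ms at ingest time (field names assumed)",
  "processors": [
    {
      "script": {
        "source": "ctx.searchDurationMs = ChronoUnit.MILLIS.between(ZonedDateTime.parse(ctx.searchTime.fromDateTime), ZonedDateTime.parse(ctx.searchTime.toDateTime))"
      }
    }
  ]
}
```

Index documents with `?pipeline=search-duration` (or set it as the index's default pipeline), and then a plain `range` query on `searchDurationMs` finds the slow searches without any per-query scripting.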
I'm fairly new to the ELK stack. I actually thought it would be best practice to log only the base info and do calculations on it when querying. However, it sounds like it may be best practice to do the calculations ahead of time and ingest the results along with the object?
Hi @AlanMark, there are no "ad hoc" scripted fields like this, they have to be part of the index pattern.
Whether to do calculations like this while querying or while ingesting is mostly a trade-off between flexibility and performance. It depends on your use case which one is more important. If your data set is small and doing the calculations on query time is still fast enough for your use case, it is probably the right choice.
In some cases it can get problematic, e.g. if you have petabytes of data and want to filter out a few documents based on a scripted field - then the scripted field has to be calculated for every document in your index on each query.