I'm still struggling with date arithmetic in the ES stack. I tried something that should have been trivial, coding along with an example presented at this URL:
I decided to try this out in Kibana's Dev Tools tab. Following the example, I was able to create an index and fill it, just as the code blocks in the blog post show. I can execute the first, trivial GET request successfully.
But I have encountered a couple of issues:
I cannot see this index on the Kibana 'Discover' tab when I look for it (but that's a separate issue); and
More significantly, the second GET request fails with the error "Cannot apply [-] to types [org.joda.time.MutableDateTime] and [org.joda.time.MutableDateTime]."
Here is the second GET query, pasted in from the URL cited above:
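(The query body didn't survive the paste. For context, a script-field query of the general shape that triggers this error looks like the sketch below; the index and field names here are hypothetical, not the blog post's actual ones.)

```json
GET tickets/_search
{
  "script_fields": {
    "time_to_closure": {
      "script": {
        "lang": "painless",
        "source": "doc['closed_at'].value - doc['opened_at'].value"
      }
    }
  }
}
```

In 6.x, `doc['some_date'].value` returns a Joda `MutableDateTime` object, and Painless defines no `-` operator for two of those, which is exactly the error quoted above.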
I'm using Kibana 6.4.1 and Elasticsearch 6.4.1. Clearly something has changed since Elastic 5.x that now sets dates to this rather unfriendly org.joda.time.MutableDateTime format. How do I change my bulk ingest to get something that will allow me to do date arithmetic?
OK. I have now solved the time problem by invoking getMillis() on the date objects. I also fixed a deprecation warning by using "source" rather than "inline". Corrected query pasted below:
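(Again the paste didn't come through; a sketch of the corrected form, with the same hypothetical index and field names as above, would be:)

```json
GET tickets/_search
{
  "script_fields": {
    "time_to_closure": {
      "script": {
        "lang": "painless",
        "source": "doc['closed_at'].value.getMillis() - doc['opened_at'].value.getMillis()"
      }
    }
  }
}
```

`getMillis()` converts each Joda date to a plain `long` (epoch milliseconds), and subtracting two longs is something Painless happily supports.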
So, other than the fact that I can't see my index in Kibana's "Discover" tab, I'm good...I'd appreciate any advice on this second matter. Or, on how to avoid this default representation using org.joda.time.MutableDateTime.
Just a side note: This script needs to be executed for every hit in your query, which in the above examples means all documents in your index.
This would be a use-case for using an ingest processor, that uses a script to calculate the duration when indexing and store it in an additional field. This will speed up your queries tremendously!
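A minimal sketch of such an ingest pipeline, assuming the documents carry ISO-8601 date strings in hypothetical `opened_at`/`closed_at` fields:

```json
PUT _ingest/pipeline/time_to_closure
{
  "description": "Compute ticket closure duration at index time",
  "processors": [
    {
      "script": {
        "lang": "painless",
        "source": "ctx.time_to_closure_ms = ZonedDateTime.parse(ctx.closed_at).toInstant().toEpochMilli() - ZonedDateTime.parse(ctx.opened_at).toInstant().toEpochMilli()"
      }
    }
  ]
}
```

Documents indexed with `?pipeline=time_to_closure` (including via `_bulk`) would then carry a precomputed `time_to_closure_ms` field, so queries can filter and aggregate on it directly instead of running a script per hit.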
@spinscale--Thanks for the tip. I am aware that what I currently have is search-intensive. Another question I have is: is there some equivalent of a cron job that I could run periodically--say daily and presumably outside of peak hours--to update my time_to_closure statistic for an existing index?
Also, while we're on the topic of time processing, do you have any advice regarding another of my recent posts?
I'm still struggling to figure out how to sweep through an index in time with a time coordinate--let's just call it t--whose value I control (say by updating it in a loop), comparing various time field values with t to determine things like which tickets are open at a given moment and how long they have been open.
Right now there is no equivalent of a cron job. I think what you need to do is run an update-by-query job that searches for recently closed issues and uses a script to calculate the difference. Again, you could potentially do this when you update the closing time, to save a round trip and have the data added immediately instead of via a cron job (given that your setup around Elasticsearch allows for this).
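A sketch of that update-by-query job, reusing the hypothetical field names from earlier (closed tickets that don't yet have a duration get one backfilled; assumes ISO-8601 date strings in `_source`):

```json
POST tickets/_update_by_query
{
  "query": {
    "bool": {
      "must":     [ { "exists": { "field": "closed_at" } } ],
      "must_not": [ { "exists": { "field": "time_to_closure_ms" } } ]
    }
  },
  "script": {
    "lang": "painless",
    "source": "ctx._source.time_to_closure_ms = ZonedDateTime.parse(ctx._source.closed_at).toInstant().toEpochMilli() - ZonedDateTime.parse(ctx._source.opened_at).toInstant().toEpochMilli()"
  }
}
```

Run this from whatever scheduler you already have (a plain cron entry invoking curl would do), ideally off-peak since update-by-query rewrites each matching document.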