Help me to convert this MySQL query to an Elasticsearch (JSON) query, and how do I visualize it in Kibana?
SELECT
user,
call_date,
sum(length_in_sec) as 'work_time',
(max(end_epoch)-min(start_epoch))-sum(length_in_sec) as 'idle_time'
FROM vicidial_log
WHERE date(call_date)='2016-01-12'
group by user
If you provide details on how you've structured data in Elasticsearch, it may be easier to help you. What type of visualization are you looking for... a table? In that case, your SELECT would map to metrics in the Y-axis, Date is just something you'd select using a time picker, and GROUP BY is a Terms Aggregation.
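To illustrate that mapping, here is a rough sketch of how the GROUP BY user plus SUM(length_in_sec) part could look as a Terms Aggregation with a sum metric. This is an assumption-laden example: the field names are taken from your SQL, and it assumes call_date is a date field and user is a non-analyzed field you can aggregate on.

```json
{
  "size": 0,
  "query": {
    "range": {
      "call_date": { "gte": "2016-01-12", "lt": "2016-01-13" }
    }
  },
  "aggs": {
    "per_user": {
      "terms": { "field": "user" },
      "aggs": {
        "work_time": { "sum": { "field": "length_in_sec" } }
      }
    }
  }
}
```

In a Kibana table or bar chart, the terms aggregation becomes the bucket on the X-axis and the sum becomes the Y-axis metric; the date range is what the time picker supplies for you.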
I think I understand the reason for your question now - you are not indexing "idle time" and "talk time" directly in ES. You should be able to calculate "idle time" and "talk time" as scripted fields in Kibana, assuming you can do that based on data from a single document. If your calculation needs to go across documents, you might need to use a script as an input to the Date Histogram (see the example here). Hope this helps!
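For the single-document case, a scripted field can be a one-line expression over the document's own values. Assuming each document carried both epochs (which, as discussed below, is not your situation), it might look like:

```
doc['end_epoch'].value - doc['start_epoch'].value
```

This only works when everything the formula needs lives in one document; it cannot see the max/min across other documents in the bucket.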
Thanks a lot Bragin,
You got my question. Now I understand scripted fields based on data from a single document, and I have an idea about the Date Histogram. As you mentioned, my case needs to go across documents. Now my question is: how do I get the maximum and minimum value of a field across documents? Would you please explain in a little more detail?
I have now partially completed the bar chart that represents the sum of call length vs. user and call date. How do I apply a scripted function to visualize an additional bar, parallel to the existing bar, representing the idle time?
hint:
The idle time of an employee on a particular date is calculated by the following equation:
idle_time = (max(end_epoch) - min(start_epoch)) - sum(length_in_sec)
max(end_epoch) and min(start_epoch) have to be calculated for each day.
And as I mentioned earlier, the minimum and maximum values have to come from across documents. Then, as shown in the image, I want to display the idle_time like this.
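One way to compute that equation across documents is with metric aggregations plus a bucket_script pipeline aggregation, which is available in Elasticsearch 2.0+. The sketch below is an assumption: field names come from the SQL above, and it presumes start_epoch and end_epoch are indexed as numeric fields.

```json
{
  "size": 0,
  "query": {
    "range": { "call_date": { "gte": "2016-01-12", "lt": "2016-01-13" } }
  },
  "aggs": {
    "per_user": {
      "terms": { "field": "user" },
      "aggs": {
        "max_end":   { "max": { "field": "end_epoch" } },
        "min_start": { "min": { "field": "start_epoch" } },
        "work_time": { "sum": { "field": "length_in_sec" } },
        "idle_time": {
          "bucket_script": {
            "buckets_path": {
              "maxEnd": "max_end",
              "minStart": "min_start",
              "workTime": "work_time"
            },
            "script": "(maxEnd - minStart) - workTime"
          }
        }
      }
    }
  }
}
```

Here max, min, and sum each run across all documents in a user's bucket, and the bucket_script combines their results per bucket - exactly the shape of the idle_time formula. Note, though, that Kibana visualizations at this point do not expose pipeline aggregations directly, so you may need to query this yourself or fall back to the scripted Date Histogram approach.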
You may need to do something more complicated, such as passing a script to the date aggregation in order to do some calculations across the returned buckets.
Notice in the example I linked to in the thread above, the script actually goes through values returned in the buckets and calculates an array of durations, returning that as a result:
// start timestamp and duration (both in milliseconds) from the current document
start = doc['recordStart'].value;
duration = doc['recordDuration'].value;
l = [];
// emit one bucket key for every minute the record spans
for (long i = 0; i < duration; i += 60000) {
    l.add(start + i);
}
return l;
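For context, a script like that is supplied to the date_histogram aggregation in place of a field, so each document contributes one bucket entry per minute it spans. A rough sketch (field names recordStart/recordDuration are from the linked example, not your index):

```json
{
  "size": 0,
  "aggs": {
    "per_minute": {
      "date_histogram": {
        "interval": "minute",
        "script": "start = doc['recordStart'].value; duration = doc['recordDuration'].value; l = []; for (long i = 0; i < duration; i += 60000) { l.add(start + i) }; return l"
      }
    }
  }
}
```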
Somehow I got the first visualization of my task working. I still have some issues with scripting. As you mentioned, I'll take a deep look at that thread and catch up with you soon. Anyway, thanks for your support.