Custom visualisation with Kibana

Hello,

I'm planning to use Kibana to display the worst-performing screens seen by our users.

I load the data from our transaction monitors into Elasticsearch using Logstash.

For each transaction, the duration is loaded.

I already have a visualization to display the total number of transactions for a given day, and the average duration for the same day.

But historically, we have been tracking the performance of the screens in order to identify the ones we need to optimize.

To do so, we use an Excel workbook for the computation, which is quite complicated.

Frankly, I spent a few hours understanding the algorithm and porting it to Python.

Now I want to display the result of this Python script in Kibana.

Is there a way to do so? Loading the data back into Elasticsearch doesn't seem like a good option, because I want to be able to display the 20 worst screens for a given period (it could be a day, a week, or a month; it's up to the person who wants to display the data).

So the script takes two parameters: the start date and the end date.

Is there any way to call this script from Kibana and display the result in table form?

Thanks for your answers.

AFAIK, that is not possible: Kibana simply calls the Elasticsearch REST API to retrieve the data.
I would suggest implementing the same logic in your Logstash pipeline: you can use the Ruby filter, code your logic in Ruby, and save the result along with your transactions into Elasticsearch.

I was afraid of this answer, but I don't think using Logstash is going to be possible, because Logstash works on each individual event loaded into ES.

But I want a synthesis per period.

I'll continue to dig into this. I'm wondering if a Vega viz could do the trick.

Maybe if you describe what kind of metrics you are looking for, we can help.
There are multiple possibilities in Kibana (TSVB, Canvas, Transforms, Vega, ...).

Well, I'm loading events in Elasticsearch with the following template:

  • Screen id
  • Duration

and some other unrelated data for this problem.

I want to find the 20 worst screens, and for this, the algorithm is the following:

I count the total number of calls made to each screen during the period (i.e. the number of occurrences of that screen in ES).

I sort these counts in an array and note the position of each screen in this array.

Then, I sum all the durations for each screen.

Again, I sort these sums in an array and note the position of each screen in this array.

Then I sum both positions for each screen and sort the result into yet another array: this array shows the worst screens for us.

A screen that has a very high duration but is not called very often... well, it's a shame, but I don't care about it.

A screen that is called very often but has a very low duration is a great one, and I don't want to see it in this ranking.

I don't know if my explanation is clear enough; I can post the Python code if needed.
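For what it's worth, here is a minimal sketch of that rank-sum logic in plain Python (the function and field names are my own, not the original script's):

```python
from collections import defaultdict

def worst_screens(events, top_n=20):
    """Rank screens by summing two ranks: call count and total duration.

    `events` is an iterable of (screen_id, duration) pairs for the
    chosen period. Returns the `top_n` worst screens, worst first.
    """
    counts = defaultdict(int)
    totals = defaultdict(float)
    for screen_id, duration in events:
        counts[screen_id] += 1
        totals[screen_id] += duration

    # Position of each screen when sorted by call count (descending).
    by_count = {s: i for i, s in enumerate(
        sorted(counts, key=counts.get, reverse=True))}
    # Position of each screen when sorted by total duration (descending).
    by_total = {s: i for i, s in enumerate(
        sorted(totals, key=totals.get, reverse=True))}

    # Sum both positions; the lowest combined rank is the worst screen.
    combined = sorted(counts, key=lambda s: by_count[s] + by_total[s])
    return combined[:top_n]
```

A screen only tops this ranking when it is both called often and expensive overall, which matches the intent above.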

Why don't you simply aggregate avg(Duration) per page and take the top 20 with the highest average?

Because the number of times the screen is called is very important too. A high average with a low number of calls is not relevant because, well, it's sad, but we are not going to do anything about it.

I guess a scripted metric will do the job

I'll give this a try, thanks for your help.

This kind of more-complex calculation is frequently solved by Elasticsearch transforms. You can pre-aggregate your data into a separate index, and then visualize.
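As a sketch of that approach (index and field names are assumed here, not taken from the thread), a pivot transform could pre-aggregate the call count and total duration per screen into a separate index, which a Kibana table can then sort:

```
PUT _transform/screen_stats
{
  "source": { "index": "transactions" },
  "dest": { "index": "screen_stats" },
  "pivot": {
    "group_by": {
      "screen_id": { "terms": { "field": "screen_id" } }
    },
    "aggregations": {
      "call_count": { "value_count": { "field": "duration" } },
      "total_duration": { "sum": { "field": "duration" } }
    }
  }
}
```

The rank-sum step itself is not a built-in aggregation, so it would still need to happen client-side (or via a runtime field / scripted metric) on top of the pre-aggregated index.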

I was thinking about running my Python script every day at 12:00 AM, but I'll have a look at transforms. Thanks!

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.