Kibana Time Series Visualization: group fields by values of another field

Hi,

Context:
We have performance testing data coming from JMeter, and from multiple test machines/hosts as well. So we've added a jmeter_host_id field to all of our samples to be able to distinguish between them.

Goal:
I'd like to create a dashboard where different aggregations of the response time (average, 95th percentile, maximum) are all visible in one panel, and also grouped by jmeter_host_id.
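
For context, the result I'm after is roughly what a query like the sketch below would return. This is only an illustration: the jmeter-* index pattern, the responseTime field name and the 30s interval are placeholders for our setup, and the exact request Kibana builds for a panel will differ.

```
POST jmeter-*/_search
{
  "size": 0,
  "aggs": {
    "per_host": {
      "terms": { "field": "jmeter_host_id", "size": 10 },
      "aggs": {
        "over_time": {
          "date_histogram": { "field": "@timestamp", "fixed_interval": "30s" },
          "aggs": {
            "avg_response_time": { "avg": { "field": "responseTime" } },
            "pct95_response_time": { "percentiles": { "field": "responseTime", "percents": [95] } },
            "max_response_time": { "max": { "field": "responseTime" } }
          }
        }
      }
    }
  }
}
```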

Attempt 1:
This is where I started from: no grouping, and the legend is displayed correctly:

Attempt 2:
Then I tried the Group by Terms setup in the TSVB panel. Grouping works, but the aggregation information is no longer visible in the legend. Frankly, I'm not completely sure what the Top: 10 parameter does here either:

Attempt 3:
This one is a Lens type panel:

Questions:

  1. To me, Attempts 2 and 3 look pretty much the same. Are these the right solutions, or is there a better one?
  2. The only difference I see between A2 and A3 is that the legend in A2 is messed up. Can this be fixed somehow?
  3. What does the Top parameter in A2 do exactly? Let's say the maximum number of samples per second could be 15000. If I want to get proper numbers, should I set it to 15000 instead of 10?
  4. Roughly the same question for A3: I'm not sure what the Number of values parameter does here.

Thank you!

  1. That looks to me like the right solution. It might look better if you change your color palette, or play with the chart type to make it stacked bars or lines.
  2. Lens is the way forward in Kibana right now. You can file a bug for TSVB in our GitHub repo for the labels being messed up, but it might take some time to get a fix.
  3&4. The Top and the Number of values parameters are the same thing: they set how many distinct values of the grouping field the underlying terms aggregation returns (Elasticsearch takes the top values of that field from each shard, combines the results from all shards, aggregates that data, and sends it to Kibana; see the sketch below). The bigger the number, the more accurate the data, but it is also a lot more taxing on performance. 15000 is not a good number; I would say 10-50 is a good start, it all depends on how fragmented the data is.
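
To make that concrete, the Top / Number of values setting ends up as the size parameter of the terms aggregation behind the panel. A minimal sketch (the jmeter-* index pattern is a placeholder, and shard_size is optional, shown only to illustrate the per-shard behaviour):

```
POST jmeter-*/_search
{
  "size": 0,
  "aggs": {
    "per_host": {
      "terms": {
        "field": "jmeter_host_id",
        "size": 10,
        "shard_size": 25
      }
    }
  }
}
```

The size only controls how many distinct jmeter_host_id buckets come back, not how many documents are aggregated inside each bucket, so it does not need to scale with your samples per second.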