Hi Robert, thanks for your reply. My apologies for not making my input data structure and my visualisation setup clear.
Every document in my index has a 'responsetime' field stored as a long, measured in milliseconds. I have filtered my data to only include documents with a response time below 3000 milliseconds, because a few extreme values otherwise prevent me from creating interval sizes smaller than 300,000 milliseconds:
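For reference, the underlying request is essentially this (a sketch, not the exact request Kibana generates; the field name `responsetime` is as described above):

```json
{
  "query": {
    "range": { "responsetime": { "lt": 3000 } }
  },
  "aggs": {
    "rt_histogram": {
      "histogram": { "field": "responsetime", "interval": 30 }
    }
  }
}
```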
I think my issue is that I want to plot two lines measured by different metrics on the same scale. The blue line in the graph from my original post should be the count of response times in each 30-millisecond interval from 0 to 3000 for a single day. I am selecting that day with a Date Range sub-aggregation in Split Series. The dates look odd because of the timestamps in my input data, but the concept is to have one day and the previous week as the two ranges (see screenshot below):
Screenshot showing data range set up:
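In query DSL terms, the split series is roughly a `date_range` sub-aggregation wrapping the histogram, something like the sketch below. The `@timestamp` field name and the dates are placeholders (the real values are in the screenshot above); the keys just mark which range is which:

```json
{
  "aggs": {
    "periods": {
      "date_range": {
        "field": "@timestamp",
        "ranges": [
          { "from": "2017-04-25", "to": "2017-04-26", "key": "single_day" },
          { "from": "2017-04-18", "to": "2017-04-25", "key": "previous_week" }
        ]
      },
      "aggs": {
        "rt_histogram": {
          "histogram": { "field": "responsetime", "interval": 30 }
        }
      }
    }
  }
}
```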
The green line should be the average count of each interval over a week. For example, say the 90-millisecond bucket has a count of 6300 for the entire week (green line); to make it comparable to the single day (blue line), I would divide that bucket by 7 to get an average count of 900. Note that I am averaging the count of each bucket, not averaging the response time itself. The idea is that the distribution should show how varied the response time is on the day compared to the entire week.
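To make the arithmetic concrete, here is a small sketch of the scaling I want (the bucket counts are made-up illustration values, not my real data):

```python
# Hypothetical per-bucket counts keyed by histogram bucket start (ms).
week_counts = {60: 4900, 90: 6300, 120: 2100}  # counts over the full 7-day range
day_counts = {60: 750, 90: 880, 120: 310}      # counts for the single day

# Scale the weekly counts down to a per-day average so both lines
# sit on the same scale: average = weekly bucket count / 7.
week_avg = {bucket: count / 7 for bucket, count in week_counts.items()}

print(week_avg[90])  # the 6300-count bucket becomes 900.0
```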
Unfortunately I don't think TSVB will help me, because my X-axis is a histogram of response time values from 0 to 3000 in 30-millisecond intervals, not a date histogram.
Thanks again and I hope my explanation was clear!