I want to use Elasticsearch as a time-series database: every hour I store a document containing an array of 850 sensor values, and the index will hold several years of data. I then want to query all documents within a given time range and downsample them with a max aggregation. My question: how do I efficiently compute the maximum value for each array index?
For example, given the arrays from five documents, I want to aggregate them into an array of the same size containing the element-wise maximum values:
Array 1: [1, 0, 2, ..... ]
Array 2: [3, 2, 2, ..... ]
Array 3: [1, 1, 1, ..... ]
Array 4: [5, 2, 0, ..... ]
Array 5: [4, 0, 3, ..... ]
Max Array: [5, 2, 3, ..... ]
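To make the desired result unambiguous, this is the equivalent computation in Python (truncated to the first three of the 850 values):

```python
arrays = [
    [1, 0, 2],
    [3, 2, 2],
    [1, 1, 1],
    [5, 2, 0],
    [4, 0, 3],
]

# zip(*arrays) groups values by array index (i.e. by column);
# max() then picks the largest value at each position.
max_array = [max(column) for column in zip(*arrays)]
print(max_array)  # [5, 2, 3]
```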
I currently use a scripted metric aggregation in which I iterate over all 850 values of every matching document, which performs quite poorly. Can this be achieved in a more efficient way?
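For reference, my current approach looks roughly like this (Kibana Dev Tools syntax; the index name `sensors` and the field names `timestamp` and `values` are placeholders). Note that I read the array from `params._source` because doc values do not preserve the order of multi-valued fields:

```json
POST sensors/_search
{
  "size": 0,
  "query": {
    "range": { "timestamp": { "gte": "now-30d/d", "lt": "now/d" } }
  },
  "aggs": {
    "elementwise_max": {
      "scripted_metric": {
        "init_script": "state.maxes = null;",
        "map_script": """
          def vals = params._source.values;
          if (state.maxes == null) {
            state.maxes = new ArrayList(vals);
          } else {
            for (int i = 0; i < vals.size(); i++) {
              if (vals.get(i) > state.maxes.get(i)) {
                state.maxes.set(i, vals.get(i));
              }
            }
          }
        """,
        "combine_script": "return state.maxes;",
        "reduce_script": """
          def result = null;
          for (def shardMaxes : states) {
            if (shardMaxes == null) { continue; }
            if (result == null) { result = new ArrayList(shardMaxes); continue; }
            for (int i = 0; i < shardMaxes.size(); i++) {
              if (shardMaxes.get(i) > result.get(i)) {
                result.set(i, shardMaxes.get(i));
              }
            }
          }
          return result;
        """
      }
    }
  }
}
```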