I have two indices within a single data view that track 'compute cycles' in a unit like 'cycle-hours per month'. One index holds the total maximum available cycle-hours for each computer in the company's data center. The other is a historical record of downtime for all of the computers in the data center. The two indices share overlapping hostnames and a common timestamp field. What I'm looking to do is subtract the summed downtime from the summed total available cycle-hours.
What is the best method to achieve this? I'm looking to create a line graph that shows, for any given month, how many compute resources were actually available once downtime is taken into account. I've tried Lens, the Sum Bucket aggregation, and TSVB, and none of them really achieves what I'm after. Runtime fields and Logstash filters won't work well either, as this is an aggregation-level calculation, not a per-document operation. Preferably there would be a way to incorporate this into a TSVB chart, as that's where the result will end up, integrated into a larger visualization that includes several other related stats.
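To make the goal concrete, here is roughly the calculation I'm after, expressed as a raw aggregation query in Dev Tools. This is only a sketch: the index names (`cycle-capacity`, `downtime`) and field names (`max_cycle_hours`, `downtime_hours`) are placeholders for my actual ones, and I don't know how (or whether) this translates back into TSVB.

```
POST my-data-view-*/_search
{
  "size": 0,
  "aggs": {
    "per_month": {
      "date_histogram": { "field": "@timestamp", "calendar_interval": "month" },
      "aggs": {
        // sum of capacity docs in this month
        "available": {
          "filter": { "term": { "_index": "cycle-capacity" } },
          "aggs": { "hours": { "sum": { "field": "max_cycle_hours" } } }
        },
        // sum of downtime docs in this month
        "down": {
          "filter": { "term": { "_index": "downtime" } },
          "aggs": { "hours": { "sum": { "field": "downtime_hours" } } }
        },
        // per-month difference between the two sums
        "net_available": {
          "bucket_script": {
            "buckets_path": { "a": "available>hours", "d": "down>hours" },
            "script": "params.a - params.d"
          }
        }
      }
    }
  }
}
```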
Alas, if only it were that easy. I'm looking to subtract the monthly aggregated downtime from the monthly aggregated available resources. I know I need to create buckets and then subtract the buckets from each other... I think. Just doing it in a runtime field won't work, since not every host has an exact one-to-one relation with the downtime records. If I have, say, 100 hosts that each have a total number of 'compute cycles' per month, plus a record of which individual hosts were offline and for how long, I need to subtract the two aggregates from each other, month by month.
Right now I have an index pattern that includes both indices, with a common timestamp so they line up. But taking the aggregations and performing math on them is where I'm stuck.
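If it helps frame the question: I suspect something like this Timelion expression captures the subtraction I want (index and field names here are placeholders, not my real ones), but I'd rather have the equivalent inside TSVB alongside the other stats.

```
.es(index='cycle-capacity*', timefield='@timestamp', metric='sum:max_cycle_hours')
  .subtract(.es(index='downtime*', timefield='@timestamp', metric='sum:downtime_hours'))
  .label('Net available cycle-hours')
```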