Why is the hard disk space varying so dynamically? The above is the cumulative hard disk space used across 5 desktop endpoints.
To get the above visualisation of used space, I used a sum aggregation followed by a cumulative sum aggregation. I have attached the screenshot.
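To make concrete what that pipeline computes, here is a rough Python sketch of a sum aggregation per time bucket followed by a cumulative sum across buckets. The bucket values are made up for illustration; this is not the actual data, just the shape of the computation:

```python
from itertools import accumulate

# Hypothetical per-host disk-usage samples grouped into time buckets
# (values in GB; these numbers are made up for illustration).
buckets = [
    [100, 120, 90],   # bucket 1: one sample per host
    [110, 125, 95],   # bucket 2
    [105, 130, 100],  # bucket 3
]

# Step 1: sum aggregation inside each bucket.
bucket_sums = [sum(b) for b in buckets]     # [310, 330, 335]

# Step 2: cumulative sum aggregation across buckets.
cumulative = list(accumulate(bucket_sums))  # [310, 640, 975]

print(bucket_sums, cumulative)
```

Note that a cumulative sum keeps adding each bucket's sum to the running total, so the resulting line climbs with every bucket even when the underlying usage is flat, which may explain part of the "dynamic" variation.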
Also note that the Beats are continuously active and are not going into inactive mode. Could this be because of the irregular count from Metricbeat, as represented in the following diagram?
Hi @jsoriano,
I want to get the total hard disk space used across all 5 desktop endpoints. For this I am using the following procedure:
Take the average CPU usage of each individual desktop.
Sum the values from the step above together and display the result.
For the second step I am using a series aggregation.
Without the series agg, I get the individual components of each host.
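The two steps above can be sketched in plain Python: average each host's samples separately, then sum the per-host averages into one number. The host names and sample values are hypothetical, and this only mirrors the logic, not Kibana's actual implementation:

```python
# Hypothetical samples per desktop over the selected interval
# (host names and values are made up for illustration).
samples_by_host = {
    "desktop-1": [40.0, 50.0, 60.0],
    "desktop-2": [20.0, 30.0],
    "desktop-3": [10.0, 10.0, 10.0, 10.0],
}

# Step 1: average per host (one avg metric per series).
per_host_avg = {h: sum(v) / len(v) for h, v in samples_by_host.items()}

# Step 2: sum the per-host averages into a single displayed value
# (what an overall "sum" series aggregation would do).
total = sum(per_host_avg.values())

print(per_host_avg, total)
```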
I also had a question: metric aggregations like avg and sum depend on the time interval over which they are aggregated. So what is the time interval for which they are calculated? Does it depend on the beat's frequency (the period at which the beat sends data)?
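To illustrate why the interval matters, here is a small sketch: the same stream of samples averaged over different bucket widths produces different per-bucket values. The sample values and the 10-second period are invented for the example:

```python
# Hypothetical samples arriving every 10 s (e.g. a beat period of 10s);
# the values are made up for illustration.
samples = [10, 20, 30, 40, 50, 60]

def bucket_avgs(values, bucket_size):
    """Average consecutive groups of `bucket_size` samples."""
    return [
        sum(values[i:i + bucket_size]) / bucket_size
        for i in range(0, len(values), bucket_size)
    ]

# 2 samples per bucket (20 s buckets) vs 3 samples per bucket (30 s buckets):
print(bucket_avgs(samples, 2))  # [15.0, 35.0, 55.0]
print(bucket_avgs(samples, 3))  # [20.0, 50.0]
```

The beat's period only controls how many raw samples land in each bucket; the bucket width itself comes from the visualization's interval setting.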
Hi @jsoriano,
If you notice in the above 3 screenshots, the total hard disk space represented in green has changed from 1.4 TB to 1.8 TB to 2.3 TB as I changed the time interval in the top right corner.
But I want it to remain 2.3 TB, because that is the total capacity of the 5 desktops I am monitoring, whether I set the interval to 24 hours or 7 days. I want it to stay at this value even if the desktops are switched off. How can this change be incorporated? I have used the following in the Visual Builder:
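One way to think about the window-independent total I am after (not necessarily how Visual Builder computes it): take the most recent reported total capacity per host and sum those, so the result does not shrink when a host has no data early in the window. A rough Python sketch, with made-up host names, timestamps, and sizes:

```python
# Hypothetical (host, timestamp, total-capacity-in-GB) reports;
# all values are invented for illustration.
reports = [
    ("desktop-1", "2023-01-01T10:00", 500),
    ("desktop-1", "2023-01-01T10:10", 500),
    ("desktop-2", "2023-01-01T09:55", 460),
    ("desktop-2", "2023-01-01T10:05", 460),
    ("desktop-3", "2023-01-01T10:08", 512),
]

# Keep only the latest report per host.
latest = {}  # host -> (timestamp, total_gb)
for host, ts, total_gb in reports:
    if host not in latest or ts > latest[host][0]:
        latest[host] = (ts, total_gb)

# Sum the latest total per host: independent of the window width,
# as long as each host has reported at least once.
capacity_gb = sum(total for _, total in latest.values())
print(capacity_gb)
```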