I am having an issue with a Timelion visualization that calculates a percentage loss. It does this by summing one value from one index, subtracting a value summed from a different index, then dividing by the original sum and multiplying by 100 (if there's an easier way to do this, I'm open to suggestions).
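For reference, a percentage-loss calculation of this shape can be written as a single chained Timelion expression. This is a sketch only; `sent-*`, `received-*`, and `bytes` are hypothetical index patterns and field names standing in for the poster's actual ones:

```
.es(index=sent-*, metric=sum:bytes).subtract(.es(index=received-*, metric=sum:bytes)).divide(.es(index=sent-*, metric=sum:bytes)).multiply(100).label('% loss')
```

The `.subtract()`, `.divide()`, and `.multiply()` functions operate point-by-point on each time bucket, so the loss percentage is computed per bucket rather than over the whole time range.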
While this appears to work at first glance, we have noticed that the range of the chart varies considerably depending on how tight your time range is. When viewing a longer time range, the percentage seems to be significantly lower than when zoomed in.
You've applied the .scale_interval function, which normalizes your data to an hourly rate. When the chart's bucket interval is less than an hour, it scales the values up. This explains the difference.
You can work around this by removing the .scale_interval function, or by choosing a fixed interval instead of auto.
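To illustrate the scaling: with `.scale_interval(1h)`, a value is multiplied by the ratio of the target interval to the bucket interval. A 1-minute bucket containing 10 events would be reported as 600 (an hourly rate), while a 1-day bucket's total would be divided by 24. A minimal sketch, again with hypothetical index and field names:

```
.es(index=sent-*, metric=sum:bytes).scale_interval(1h)
```

When the whole ratio is built from terms that scale identically, the percentage itself should be unaffected, so the sensitivity to zoom level suggests the scaling was applied unevenly across the series in the expression.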
Removing the .scale_interval seems to have inverted the problem. After doing that, the larger time range accurately reports 9%-13% loss, but when zoomed in it inaccurately reports 1.5%-2% loss.
Specifying a fixed interval seems to have addressed the issue. I did hit the max buckets limit for longer timeframes, but since this server has a limited role with few users, bumping that limit up in the Kibana settings addressed it. I had some initial stability issues after doing that, but scaling it back to only what was necessary (5000) for both Max Buckets and Timelion Max Buckets seems to have worked.
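For anyone landing here later: depending on your stack version, those two limits typically map to the Elasticsearch `search.max_buckets` cluster setting and the `timelion:max_buckets` Kibana advanced setting. A sketch of the values described above, assuming a 7.x-era stack:

```
# Elasticsearch: cluster setting (elasticsearch.yml or the cluster settings API)
search.max_buckets: 5000

# Kibana: Management -> Advanced Settings
timelion:max_buckets: 5000
```

Keeping both limits at the minimum your longest time range actually needs, as done here, bounds the memory each aggregation can consume.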