I have an index with about 10,000 CSV documents, each containing a timestamp. I want to see the trend of the count over one day, so I chose @timestamp per hour in Kibana, but decimals appear in the histogram like this:
This may seem confusing, but here's what's happening:
You've selected "hourly" as the interval. It looks like your time range is about three weeks, which means that if we were to show a bar for every interval, that'd be about 500 bars! That's too many to show, so behind the scenes we automatically scale the interval to something that doesn't create so many bars. In this case, we select 3 hours. (Notice the "Scaled to 3 hours" text next to the interval?)
Here's where it gets a bit controversial. Since you've selected "Hourly", we assume you really want to see the count per hour, not per 3 hours. So we take the count for that 3-hour period and divide it by 3 to give you the average count per hour in that time range.
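Here's a minimal sketch of that behaviour in Python, not Kibana's actual code. The bar cap (`MAX_BARS = 200`) and the function names are assumptions made purely for illustration; the real limit lives inside Kibana's histogram logic.

```python
# A toy model of Kibana's interval scaling and per-hour normalization.
MAX_BARS = 200  # assumed cap on how many bars get drawn; not Kibana's real value

def scale_interval(range_hours, requested_hours):
    """Widen the interval until the number of bars fits under MAX_BARS."""
    interval = requested_hours
    while range_hours / interval > MAX_BARS:
        interval += requested_hours
    return interval

def per_requested_interval(bucket_count, scaled_hours, requested_hours):
    """Normalize a scaled bucket's count back to the requested interval."""
    return bucket_count * requested_hours / scaled_hours

range_hours = 21 * 24                      # ~3 weeks => 504 hourly bars, too many
scaled = scale_interval(range_hours, 1)    # scales to 3 hours with the assumed cap
print(scaled)                              # 3

# A 3-hour bucket holding 250 documents is drawn as 250 / 3 ≈ 83.33,
# which is where the decimals on the y-axis come from.
print(per_requested_interval(250, scaled, 1))
```

So any 3-hour bucket whose document count isn't a multiple of 3 will show up as a fractional value on the y-axis.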
There's been some discussion about this, and about whether we should just show the count for that 3-hour period instead of the average per hour over it.