STEP 4: Change the time range to 2 years. Just by changing the time range, the result is different. It still shows the same dates (Dec 1, Dec 4, Dec 7, and Dec 8), but the values are different.
Note: nothing changes if you set the time range to Last 30, 60, or 90 days, or to Last 1 year. It only happens when you select "Last 2 years" and beyond.
STEP 5: Similar issues were observed while working with a Lens pie chart, taking a count on alias_mtd. This time the date itself changes.
Notice the date "2020-11-30": this date is not present anywhere in those 4 indices.
So what are you trying to achieve here in the end? If the bucket size (or time range) changes, I'm not surprised that the end result changes as well. Do you want to keep daily buckets, or what is the question or goal?
Hi Xeraa, the goal is to get (Sum of Amount over all days) / (Number of days), which is the average amount per day. I purposely split the data by date to post it in the discussion forum and see what is going on. In the actual case I won't.
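To make the intended metric concrete, here is a minimal sketch of the calculation described above. The amounts are made up for illustration; they are not from the thread's data.

```python
# Daily totals (hypothetical values): date -> sum of Amount on that day
daily = {
    "2020-12-01": 120.0,
    "2020-12-04": 80.0,
    "2020-12-07": 100.0,
    "2020-12-08": 60.0,
}

# (Sum of Amount over all days) / (Number of days)
avg_per_day = sum(daily.values()) / len(daily)
print(avg_per_day)  # 90.0
```

Note that the denominator is the number of days that actually have data (4 here), which is why a fixed result is expected regardless of the selected time range, as long as the same days fall inside it.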
What is your opinion on the Lens pie chart issue mentioned above? Why is it showing a date that is not in the data? Isn't this a case of wrong reporting, and a serious bug in the application?
Regarding your comment "I'm not surprised that the end result changes as well": if the number of days and the amounts are not changing, why does the end result change just by changing the time range? This gives a wrong result, and no one will trust the report. It defeats the purpose altogether.
@xeraa There seem to be many disconnects, so let me clarify.
For #2: I am not moving the bucket interval from days to weeks. Kibana does that automatically.
For #3: Ideally it could be a pie chart or a bar chart. Other applications like Tableau don't restrict which metric you can plot on a pie/bar chart. I want to show a pie chart of market coverage. To quantify how much of the market is covered by each category, a daily/MTD/YTD average makes more sense for us than a sum, hence a pie chart plotting average values.
For #4: @timestamp doesn't make any sense for us because the data is not real-time. We have another date column that we rely on for our analysis. Our dashboard does Daily/MTD/YTD analysis, so just 1 bucket is of no use.
For #5: The 20 MB index is what I am experimenting on. Our actual index is much larger, in the GBs.
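One plausible explanation for the mystery date (an assumption on my part, not confirmed in this thread): when Kibana automatically widens the interval to weekly, each `date_histogram` bucket is keyed by the bucket's start date, and weekly buckets start on Monday. 2020-11-30 was a Monday, so documents dated Dec 1 and Dec 4 land in a bucket labeled 2020-11-30 even though no document carries that date. A minimal sketch of that bucketing:

```python
from datetime import date, timedelta

def week_bucket_key(d):
    # Weekly buckets start on Monday (ISO weeks); the bucket is
    # keyed by its start date, not by any document's actual date.
    return d - timedelta(days=d.weekday())

# The four dates mentioned earlier in the thread
doc_dates = [date(2020, 12, 1), date(2020, 12, 4),
             date(2020, 12, 7), date(2020, 12, 8)]

buckets = sorted({week_bucket_key(d) for d in doc_dates})
print(buckets)  # [datetime.date(2020, 11, 30), datetime.date(2020, 12, 7)]
```

So the "2020-11-30" label would be a bucket key rather than a data point, which would make it surprising but not wrong reporting.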
Quick question on this: when you said "index with 50GB each", does that mean the total size, each shard's size, or each segment's size?
Solution to the above issue: @xeraa, we got a solution to the problem I mentioned with the help of the Elastic team. The solution is to increase the "Maximum buckets" property in Advanced Settings to a higher value like 2000.
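For reference, Elasticsearch itself also enforces a bucket limit via the dynamic cluster setting `search.max_buckets`. The thread's fix was made in Kibana's Advanced Settings UI; the snippet below is only the cluster-side analogue, sketched under the assumption of a local, unsecured cluster at `localhost:9200`.

```shell
# Raise the cluster-wide aggregation bucket limit to 2000
# (assumes Elasticsearch is reachable at localhost:9200 without auth)
curl -X PUT "localhost:9200/_cluster/settings" \
  -H 'Content-Type: application/json' \
  -d '{ "persistent": { "search.max_buckets": 2000 } }'
```

Whether the Kibana setting or the cluster setting is the binding limit depends on the version and visualization type, so check both if the error persists.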
Thanks for all your help. Will reach out to you for any more queries on ELK.