The server crashes when we request data. The time range is quite small, no more than 7 days.
But when we checked, the bucket size was set to 29 or more, and the visualizations use aggregations. The dashboard contains several visualizations configured like this.
What do you think an ideal bucket size should be, and why? What should you consider when defining the bucket size while designing dashboards?
29 buckets is not an abnormally large number, and Kibana should be able to handle it. In fact, if you build a simple line chart with a Date Histogram on the x-axis, you will get far more buckets than that.
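For illustration, here is a minimal sketch of the kind of query such a visualization sends, assuming a hypothetical `my-logs` index with an `@timestamp` field. A 7-day range bucketed by hour produces roughly 168 buckets, far more than 29, and Elasticsearch typically handles that without trouble:

```json
POST /my-logs/_search
{
  "size": 0,
  "query": {
    "range": {
      "@timestamp": { "gte": "now-7d/d", "lte": "now/d" }
    }
  },
  "aggs": {
    "per_hour": {
      "date_histogram": {
        "field": "@timestamp",
        "fixed_interval": "1h"
      }
    }
  }
}
```

Note that bucket counts multiply if you nest further aggregations (e.g. a terms sub-aggregation under the date histogram), so it is usually the combination of aggregations, not the histogram interval alone, that drives memory use.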
How exactly is the server (Kibana or Elasticsearch?) crashing? Are there any logs indicating it's running out of memory, or something else?