Too many buckets

Hello

In an environment with almost 200 hosts, each host ships Metricbeat data (network metrics every 30 seconds, among others) to a single Elasticsearch cluster. There is a business requirement to visualize incoming network traffic bandwidth on a specified network interface.
Because of the volume of hosts/data, and because the rates must first be calculated with a derivative aggregation, trying to show a timeframe larger than a few hours ends with a too_many_buckets error.
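
For context, the visualization boils down to something like the aggregation below (only a sketch: the metricbeat-* index pattern, the eth0 interface and the host.name / system.network.* fields are assumptions based on the Metricbeat system module). One bucket per host per 30-second interval means 200 hosts over a single day already produce well over 500,000 buckets:

# sketch: per-host derivative of the interface byte counter (assumed fields)
GET metricbeat-*/_search
{
  "size": 0,
  "query": {
    "bool": {
      "filter": [
        { "term": { "system.network.name": "eth0" } },
        { "range": { "@timestamp": { "gte": "now-24h" } } }
      ]
    }
  },
  "aggs": {
    "per_host": {
      "terms": { "field": "host.name", "size": 200 },
      "aggs": {
        "over_time": {
          "date_histogram": { "field": "@timestamp", "fixed_interval": "30s" },
          "aggs": {
            "in_bytes": { "max": { "field": "system.network.in.bytes" } },
            "in_rate": { "derivative": { "buckets_path": "in_bytes", "unit": "1s" } }
          }
        }
      }
    }
  }
}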

It would be most convenient to store the aggregation results in another index for further processing.
I'm new to Elastic, so maybe there is a better approach to achieve this.
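
What I had in mind is something like a continuous transform that pre-aggregates the raw documents into a much smaller summary index (one document per host/interface per 5-minute interval), so the rate could then be derived against that index with far fewer buckets. This is only a rough sketch based on the docs; the transform id, destination index and field names are made up, so please correct me if transforms are the wrong tool here:

# sketch: continuous transform into a 5-minute summary index (assumed names)
PUT _transform/network-bandwidth-summary
{
  "source": {
    "index": "metricbeat-*",
    "query": { "exists": { "field": "system.network.in.bytes" } }
  },
  "dest": { "index": "network-bandwidth-summary" },
  "frequency": "1m",
  "sync": { "time": { "field": "@timestamp", "delay": "60s" } },
  "pivot": {
    "group_by": {
      "host": { "terms": { "field": "host.name" } },
      "interface": { "terms": { "field": "system.network.name" } },
      "interval": { "date_histogram": { "field": "@timestamp", "fixed_interval": "5m" } }
    },
    "aggregations": {
      "in_bytes_max": { "max": { "field": "system.network.in.bytes" } },
      "out_bytes_max": { "max": { "field": "system.network.out.bytes" } }
    }
  }
}

POST _transform/network-bandwidth-summary/_start

The visualization would then read from network-bandwidth-summary at the coarser 5-minute interval, which alone cuts the bucket count per host by roughly 10x compared to the raw 30-second data.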
For reference, search.max_buckets has already been raised to 1,000,000 as a transient cluster setting:

GET _cluster/settings?include_defaults=true&filter_path=transient.search.max_buckets

{
  "transient" : {
    "search" : {
      "max_buckets" : "1000000"
    }
  }
}
