Too many buckets error on Monitoring page indices

I upgraded my cluster to 7.4.0. Metricbeat, Logstash, Elasticsearch, and Kibana were all upgraded. After the upgrade, when I try to visit the Logstash pipelines page under Monitoring, I get the following error:

[2019-11-02T09:33:19,631][DEBUG][o.e.a.s.TransportSearchAction] [coord03.mypod] [.monitoring-logstash-7-2019.11.02][0], node[JQ2eDaZhT3O925R7b3YmVQ], [R], s[STARTED], a[id=vXWLisLYSAW18toXPJJX2A]: Failed to execute [SearchRequest{searchType=QUERY_THEN_FETCH, indices=[.monitoring-logstash-7-2019.10.31, .monitoring-logstash-7-2019.11.01, .monitoring-logstash-7-2019.11.02], indicesOptions=IndicesOptions[ignore_unavailable=true, allow_no_indices=true, expand_wildcards_open=true, expand_wildcards_closed=false, allow_aliases_to_multiple_indices=true, forbid_closed_indices=true, ignore_aliases=false, ignore_throttled=true], types=[], routing='null', preference='null', requestCache=null, scroll=null, maxConcurrentShardRequests=0, batchedReduceSize=512, preFilterShardSize=128, allowPartialSearchResults=true, localClusterAlias=null, getOrCreateAbsoluteStartMillis=-1, ccsMinimizeRoundtrips=true, source={"size":0,"query":{"bool":{"filter":[{"term":{"cluster_uuid":{"value":"di5uP58WRfO6OmYzlXZaTg","boost":1.0}}},{"range":{"logstash_stats.timestamp":{"from":1572708790067,"to":1572712390067,"include_lower":true,"include_upper":true,"format":"epoch_millis","boost":1.0}}}],"adjust_pure_negative":true,"boost":1.0}},"aggregations":{"check":{"date_histogram":{"field":"logstash_stats.timestamp","interval":"30s","offset":0,"order":{"_key":"asc"},"keyed":false,"min_doc_count":0},"aggregations":{"pipelines_nested":{"nested":{"path":"logstash_stats.pipelines"},"aggregations":{"by_pipeline_id":{"terms":{"field":"logstash_stats.pipelines.id","size":1000,"min_doc_count":1,"shard_min_doc_count":0,"show_term_doc_count_error":false,"order":[{"_count":"desc"},{"_key":"asc"}]},"aggregations":{"by_pipeline_hash":{"terms":{"field":"logstash_stats.pipelines.hash","size":1000,"min_doc_count":1,"shard_min_doc_count":0,"show_term_doc_count_error":false,"order":[{"_count":"desc"},{"_key":"asc"}]},"aggregations":{"by_ephemeral_id":{"terms":{"field":"logstash_stats.pipelines.ephemeral_id","size":1000,"min_doc_count":1,"shard_min_doc_count":0,"show_term_doc_count_error":false,"order":[{"_count":"desc"},{"_key":"asc"}]},"aggregations":{"events_stats":{"stats":{"field":"logstash_stats.pipelines.events.out"}},"throughput":{"bucket_script":{"buckets_path":{"min":"events_stats.min","max":"events_stats.max"},"script":{"source":"params.max - params.min","lang":"painless"},"gap_policy":"skip"}}}},"throughput":{"sum_bucket":{"buckets_path":["by_ephemeral_id>throughput"],"gap_policy":"skip"}}}},"throughput":{"sum_bucket":{"buckets_path":["by_pipeline_hash>throughput"],"gap_policy":"skip"}}}}}}}}}}}]
org.elasticsearch.transport.RemoteTransportException: [datawarm03.mypod][10.241.4.147:9401][indices:data/read/search[phase/query]]
Caused by: org.elasticsearch.search.aggregations.MultiBucketConsumerService$TooManyBucketsException: Trying to create too many buckets. Must be less than or equal to: [10000] but was [10001]. This limit can be set by changing the [search.max_buckets] cluster level setting.
	at org.elasticsearch.search.aggregations.MultiBucketConsumerService$MultiBucketConsumer.accept(MultiBucketConsumerService.java:110) ~[elasticsearch-7.4.0.jar:7.4.0]
	at org.elasticsearch.search.aggregations.bucket.BucketsAggregator.consumeBucketsAndMaybeBreak(BucketsAggregator.java:134) ~[elasticsearch-7.4.0.jar:7.4.0]
	at org.elasticsearch.search.aggregations.bucket.terms.GlobalOrdinalsStringTermsAggregator.buildAggregation(GlobalOrdinalsStringTermsAggregator.java:214) ~[elasticsearch-7.4.0.jar:7.4.0]
	at org.elasticsearch.search.aggregations.AggregatorFactory$MultiBucketAggregatorWrapper.buildAggregation(AggregatorFactory.java:152) ~[elasticsearch-7.4.0.jar:7.4.0]
	at org.elasticsearch.search.aggregations.bucket.BestBucketsDeferringCollector$2.buildAggregation(BestBucketsDeferringCollector.java:226) ~[elasticsearch-7.4.0.jar:7.4.0]
	at org.elasticsearch.search.aggregations.bucket.BucketsAggregator.bucketAggregations(BucketsAggregator.java:143) ~[elasticsearch-7.4.0.jar:7.4.0]
	at org.elasticsearch.search.aggregations.bucket.terms.GlobalOrdinalsStringTermsAggregator.buildAggregation(GlobalOrdinalsStringTermsAggregator.java:238) ~[elasticsearch-7.4.0.jar:7.4.0]
	at org.elasticsearch.search.aggregations.AggregatorFactory$MultiBucketAggregatorWrapper.buildAggregation(AggregatorFactory.java:152) ~[elasticsearch-7.4.0.jar:7.4.0]
	at org.elasticsearch.search.aggregations.bucket.BucketsAggregator.bucketAggregations(BucketsAggregator.java:143) ~[elasticsearch-7.4.0.jar:7.4.0]
	at org.elasticsearch.search.aggregations.bucket.nested.NestedAggregator.buildAggregation(NestedAggregator.java:129) ~[elasticsearch-7.4.0.jar:7.4.0]
	at org.elasticsearch.search.aggregations.bucket.BucketsAggregator.bucketAggregations(BucketsAggregator.java:143) ~[elasticsearch-7.4.0.jar:7.4.0]
	at org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramAggregator.buildAggregation(DateHistogramAggregator.java:142) ~[elasticsearch-7.4.0.jar:7.4.0]
	at org.elasticsearch.search.aggregations.AggregationPhase.execute(AggregationPhase.java:130) ~[elasticsearch-7.4.0.jar:7.4.0]
	at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:119) ~[elasticsearch-7.4.0.jar:7.4.0]
	at org.elasticsearch.indices.IndicesService.lambda$loadIntoContext$18(IndicesService.java:1285) ~[elasticsearch-7.4.0.jar:7.4.0]
	at org.elasticsearch.indices.IndicesService.lambda$cacheShardLevelResult$19(IndicesService.java:1342) ~[elasticsearch-7.4.0.jar:7.4.0]
	at org.elasticsearch.indices.IndicesRequestCache$Loader.load(IndicesRequestCache.java:174) ~[elasticsearch-7.4.0.jar:7.4.0]
	at org.elasticsearch.indices.IndicesRequestCache$Loader.load(IndicesRequestCache.java:157) ~[elasticsearch-7.4.0.jar:7.4.0]
	at org.elasticsearch.common.cache.Cache.computeIfAbsent(Cache.java:433) ~[elasticsearch-7.4.0.jar:7.4.0]
	at org.elasticsearch.indices.IndicesRequestCache.getOrCompute(IndicesRequestCache.java:123) ~[elasticsearch-7.4.0.jar:7.4.0]
	at org.elasticsearch.indices.IndicesService.cacheShardLevelResult(IndicesService.java:1348) ~[elasticsearch-7.4.0.jar:7.4.0]

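For context on why this query blows past the limit: the range filter in the request above spans exactly one hour, the `date_histogram` creates one bucket per 30s interval, and each of those fans out into three nested `terms` aggregations (`by_pipeline_id` > `by_pipeline_hash` > `by_ephemeral_id`, `size: 1000` each). A rough sketch of the arithmetic (my estimate, not Elasticsearch's exact bucket accounting):

```python
# Rough bucket arithmetic for the monitoring query above (a sketch,
# not Elasticsearch's exact accounting).
window_ms = 1572712390067 - 1572708790067   # "from"/"to" in the range filter
interval_ms = 30_000                        # date_histogram "interval": "30s"
date_buckets = window_ms // interval_ms
print(date_buckets)  # 120 time buckets in the one-hour window

# Each time bucket then fans out into the nested terms aggregations, so
# even ~84 distinct pipeline/hash/ephemeral-id combinations per time
# bucket is enough to cross the 10,000-bucket default:
print(date_buckets * 84)  # 10080 > 10000
```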
Does this mean the monitoring page is broken on 7.x? Am I missing something?
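In the meantime I found that the exception itself points at the `search.max_buckets` cluster setting. Raising it is only a workaround, not a fix for the underlying monitoring query, but something like the following should let the page load (`20000` is just an example value; the 7.x default is `10000`):

```
PUT _cluster/settings
{
  "transient": {
    "search.max_buckets": 20000
  }
}
```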

I see the same thing on 7.4.2, in the Elasticsearch logs, when trying to load the [Metricbeat Kafka] Overview ECS dashboard.

7.4.2 is actually a broken release. Check out this issue.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.