I set search.max_buckets=10000 because our cluster was intermittently failing and these messages were appearing in the log.
After that change, I went to the monitoring screen that lists the nodes: it works for "Last 15 minutes", but shows
"No records that match" for "Last 1 hour" or any longer range.
[2018-08-06T19:14:40,613][WARN ][r.suppressed ] path: /.monitoring-es-2-%2C.monitoring-es-6-/_search, params: {size=10000, ignore_unavailable=true, index=.monitoring-es-2-,.monitoring-es-6-, filter_path=hits.total,hits.hits._source.source_node,aggregations.nodes.buckets.key,aggregations.nodes.buckets.node_cgroup_quota.buckets,aggregations.nodes.buckets.node_cgroup_throttled.buckets,aggregations.nodes.buckets.node_cpu_utilization.buckets,aggregations.nodes.buckets.node_load_average.buckets,aggregations.nodes.buckets.node_jvm_mem_percent.buckets,aggregations.nodes.buckets.node_free_space.buckets}
org.elasticsearch.action.search.SearchPhaseExecutionException:
at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:288) [elasticsearch-6.3.2.jar:6.3.2]
at org.elasticsearch.action.search.FetchSearchPhase$1.onFailure(FetchSearchPhase.java:91) [elasticsearch-6.3.2.jar:6.3.2]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.onFailure(ThreadContext.java:710) [elasticsearch-6.3.2.jar:6.3.2]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) [elasticsearch-6.3.2.jar:6.3.2]
at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:41) [elasticsearch-6.3.2.jar:6.3.2]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.3.2.jar:6.3.2]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_171]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_171]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_171]
Caused by: org.elasticsearch.search.aggregations.MultiBucketConsumerService$TooManyBucketsException: Trying to create too many buckets. Must be less than or equal to: [10000] but was [10001]. This limit can be set by changing the [search.max_buckets] cluster level setting.
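As the exception message says, the limit can be changed with the search.max_buckets cluster setting. A minimal sketch of raising it via the cluster settings API, assuming Elasticsearch is reachable on localhost:9200 (the value 20000 is only an illustrative choice, not a recommendation):

```shell
# Raise the soft aggregation bucket limit cluster-wide.
# "persistent" survives a full cluster restart; use "transient" to test first.
curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "search.max_buckets": 20000
  }
}
'
```

Note that a longer time range in the monitoring UI produces more date-histogram buckets per node, which is why "Last 15 minutes" stays under the 10000 limit while "Last 1 hour" exceeds it.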