We have a situation where, in general, end-users of our app shouldn't hit 10000 (or whatever) buckets, but when troubleshooting, expert users might, and they understand the consequences of doing so.
Right now, the only way to achieve this is to change the cluster setting for these users, run the query, and then set it back (see the sketch below).
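For reference, that workaround looks roughly like this. It's a minimal sketch, assuming a cluster reachable at http://localhost:9200 and a hypothetical index (`my-index`) and field (`user.id`); it raises `search.max_buckets` as a transient cluster setting, runs the large aggregation, then clears the override to restore the default.

```python
import requests

ES = "http://localhost:9200"  # assumed local cluster address

# 1. Temporarily raise the limit via a transient cluster setting
#    (transient so it does not survive a full cluster restart).
requests.put(
    f"{ES}/_cluster/settings",
    json={"transient": {"search.max_buckets": 100000}},
)

# 2. Run the expensive aggregation while the limit is raised.
#    Index name and field are placeholders for illustration.
resp = requests.post(
    f"{ES}/my-index/_search",
    json={
        "size": 0,
        "aggs": {
            "by_user": {"terms": {"field": "user.id", "size": 50000}}
        },
    },
)

# 3. Clear the transient override (null resets it to the default).
requests.put(
    f"{ES}/_cluster/settings",
    json={"transient": {"search.max_buckets": None}},
)
```

The obvious drawback is that the relaxed limit applies to the whole cluster for as long as the override is in place, not just to the one troubleshooting query.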
This limit was put in place to protect against bad queries that can kill a cluster. It is a cluster setting that only admins should change, so wouldn't it be an issue for them if this value were overridable on a per-query basis? Expert users shouldn't have to override this value; they know how to build an aggregation that won't return too many buckets, don't they? I can't think of a case where returning more buckets is needed to troubleshoot an issue. Do you have concrete examples in mind?