I have an 8 data node ES cluster (a development one with about 4TB of data in 2 billion documents) with Kibana attached, and I'm looking for a way to limit the size or scope of the queries that Kibana can request. I've managed to lock up all the ES nodes twice now (mostly on purpose) by running very heavy aggregation queries over a large number of shards and long time spans. The search threads all get consumed, and the ES process stops responding to service requests and has to be killed. Since I'm building this for production and would like to be able to turn less-trained analysts loose on Kibana, what can I do to make sure they can't kill my cluster?
What version are you on?
There are some soft limits in the latest versions, with more coming in v5, to help with this. But otherwise your options are limited.
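One existing safeguard worth looking at is the circuit breakers, which trip a request before it exhausts the heap rather than letting the node fall over. These are real settings in ES 2.x; the values below are illustrative, not recommendations:

```yaml
# elasticsearch.yml -- circuit breaker settings (ES 2.x)
# Trip individual requests (e.g. huge aggregations) above this share of heap:
indices.breaker.request.limit: 40%
# Cap heap used for loading fielddata:
indices.breaker.fielddata.limit: 60%
# Overall ceiling across all breakers combined:
indices.breaker.total.limit: 70%
```

These can also be changed at runtime via the cluster settings API. Note they protect against out-of-memory conditions, not against a query simply tying up all the search threads for a long time.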
We just moved to ES 2.3 and Kibana 4.5 in the last week. What soft limits do you mean?
What is "from + size" in this context?
It's just an example of the things we are putting in.
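For reference, the `from + size` soft limit landed in 2.x as `index.max_result_window`: a search whose `from + size` exceeds it (default 10000) is rejected instead of being attempted. A sketch of raising or lowering it on one index, assuming a node on `localhost:9200` and a hypothetical index name `my_index`:

```shell
# Cap deep pagination on my_index: any search with from + size > 5000 is rejected
curl -XPUT 'http://localhost:9200/my_index/_settings' -d '
{
  "index.max_result_window": 5000
}'
```

A lower value like this protects the cluster from deep-paging requests that would otherwise have every shard build and ship a huge result set to the coordinating node.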
Great, thanks for the information. I'll check those out and look forward to what's coming in v5.