Resource Management of all requests in the system

Hi,
I am looking for some feedback on my understanding, and I have a question about how resource management is done in ES. Today ES has circuit breakers, which provide a way to manage high JVM heap usage, and thread pool queues (with a max size) that protect a cluster from being overloaded with too many requests. However, the circuit breakers are best effort and don't cover every case.

I am wondering why ES has not explored/adopted a workload-management style layer that manages the resources available globally in the cluster and decides whether to admit or reject requests based on that. I also don't see any way in ES to estimate how much memory a search request is expected to consume, similar to what a traditional distributed database with a cost-based planner can do. That also makes the circuit breakers unreliable for preventing nodes from OOM or high resource usage, because there are search requests in the system that can execute and consume memory which is not accounted for.
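To make the two existing mechanisms concrete, here is a minimal sketch of how I have been looking at them, assuming a local, unsecured cluster on http://localhost:9200, a recent ES version, and the Python requests library:

```python
import requests

ES = "http://localhost:9200"  # assumption: a local, unsecured cluster

# Per-node circuit breaker state: configured limit, estimated usage, trip count.
breakers = requests.get(f"{ES}/_nodes/stats/breaker").json()
for node in breakers["nodes"].values():
    for name, b in node["breakers"].items():
        print(node["name"], name, b["limit_size"], b["estimated_size"], "tripped:", b["tripped"])

# Per-node thread pool state: queue depth and rejections for the search/write pools.
pools = requests.get(f"{ES}/_nodes/stats/thread_pool").json()
for node in pools["nodes"].values():
    for pool in ("search", "write"):
        stats = node["thread_pool"][pool]
        print(node["name"], pool, "queue:", stats["queue"], "rejected:", stats["rejected"])
```

Both of these are per-node views; nothing ties them together into a cluster-wide admission decision, which is the gap I'm asking about.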

There is _explain, which is akin to that, but it needs to run the query to figure it out. That is due to Elasticsearch being a distributed system, the data inside a cluster being distributed as well, and the relative nature of scoring.

We are always working on making resource management better! We've added things like the ability to cancel a query, and there's more on the way. That said, things have come a long way, and the circuit breakers should protect against most things.
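For example, a rough sketch (assuming a local, unsecured cluster and the Python requests library) of listing running searches and cancelling them through the task management API:

```python
import requests

ES = "http://localhost:9200"  # assumption: a local, unsecured cluster

# List the search tasks currently running anywhere in the cluster.
tasks = requests.get(f"{ES}/_tasks", params={"actions": "*search*", "detailed": "true"}).json()
for node in tasks["nodes"].values():
    for task_id, task in node["tasks"].items():
        print(task_id, round(task["running_time_in_nanos"] / 1e9, 1), "s", task.get("description", ""))

# Cancel one task by id with POST /_tasks/<task_id>/_cancel,
# or cancel every running search (use with care):
requests.post(f"{ES}/_tasks/_cancel", params={"actions": "*search*"})
```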

Is there an underlying problem or question you're looking to answer here? Is your cluster running into issues?

+1 to what Mark said.

I'm curious to hear the names of some specific "traditional distributed database" systems that perform global resource management as you describe. I think node-level resource management (i.e. circuit breakers) is the more common approach as it involves a good deal less coordination and therefore scales up much better.

The circuit breakers aren't perfect (yet) but they're still very much an active area of development. Each improvement there lets clusters run closer to the edge, which always uncovers another bottleneck to work on.

@warkolm & @DavidTurner Thanks for your responses. Systems like Presto and Impala (with YARN) seem to work on a model of global resource management. They have the concept of queues for multi-tenancy and distribute resources across those queues. Using statistics on the data, they estimate the resource usage of queries, and the resource constraints are enforced via spill-to-disk mechanisms. That said, these systems also tend to be read-heavy.

I can understand that since circuit breaker decisions are made locally rather than through global coordination, they help the cluster scale better. But I am trying to compare that with a model where a global coordinator admits each request based on the resources available versus the resources the request needs. With the ES execution model, where any node can act as the coordinator of a request, the coordinator doesn't use the state of the destination node that will actually serve the request. I think this can cause problems that a global coordinator would not have. For example: say a node N1 is running hot on JVM usage, and another node accepts a search request and forwards it to N1. Depending on the query (e.g. ones that use unbounded caches), this can crash the ES process on N1. I have seen cases where, with the default circuit breaker configuration and high read/write traffic, the ES process on multiple nodes ran hot on JVM heap and eventually started crashing. That was on an older version of ES (6.x), though. Also, is there any recommendation on how to configure circuit breakers based on indexing and search traffic?
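To illustrate the scenario, a small sketch (same assumptions as above: local, unsecured cluster and the Python requests library) that reads per-node heap usage; as far as I can tell, a coordinating node doesn't consult anything like this before forwarding a search:

```python
import requests

ES = "http://localhost:9200"  # assumption: a local, unsecured cluster

# Heap usage per node. A node like N1 running hot shows up here, but a
# coordinating node will still route search requests to it regardless.
stats = requests.get(f"{ES}/_nodes/stats/jvm").json()
for node in stats["nodes"].values():
    heap_pct = node["jvm"]["mem"]["heap_used_percent"]
    print(node["name"], f"{heap_pct}% heap used")
```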

P.S. I am new to the ES community and trying to understand how ES behaves.

If you have a high throughput system already in place, there's only so much that circuit breakers can/will do, so tuning those isn't likely to have a massive impact compared to adding more resources to the system.

This is very much an It Depends answer. You would need to test based on your use case, cluster, config, resources etc.
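For reference, the main dials are the circuit breaker limits, which are dynamic cluster settings. A hedged sketch with purely illustrative percentages (again assuming a local, unsecured cluster and the Python requests library):

```python
import requests

ES = "http://localhost:9200"  # assumption: a local, unsecured cluster

# Purely illustrative values, not recommendations -- the right limits depend on
# heap size and the read/write mix, and need to be validated by testing.
settings = {
    "transient": {
        "indices.breaker.total.limit": "70%",      # parent breaker across all breakers
        "indices.breaker.request.limit": "40%",    # per-request structures, e.g. aggregations
        "indices.breaker.fielddata.limit": "30%",  # fielddata cache
    }
}
print(requests.put(f"{ES}/_cluster/settings", json=settings).json())
```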

Thanks for the pointers @sohami.

Impala's native model is pretty similar to Elasticsearch's, with imprecise (distributed) limits on the number of concurrently-executing searches, the lengths of queues, memory usage and so on. I think the big difference here is that Impala can apparently determine the resource requirements of a query in advance of its execution more accurately than Elasticsearch can today; as I mentioned before, this is an active area of development for us. Bounding memory usage by spilling to disk has definitely been discussed too, so watch this space for developments there.

Presto and YARN do have a single centralised resource manager that can make global decisions, but I don't think it would work for Elasticsearch to involve a centralised resource manager in every search like that. This would impose a cluster-wide limit on throughput. YARN in particular looks to be designed for much "chunkier" jobs than individual searches. Making this kind of architecture reliable (and to fail over reliably) is also very hard.

Crashing under load, rather than pushing back by failing the search, is a bug. It's not a question of configuration (although it may be possible to find a temporary workaround in the config). The proper action to take is to report it as a bug so we can fix it.

Changes to the circuit breakers introduced in 7.0 fixed quite a few cases where this might happen. Reiterating: this is an active area of development; please report bugs like this if you find them.

