Kibana query preference leads to unfair load balancing

We have a cluster made of 3 data nodes. Our indexes are configured with 3 shards and 1 replica. For whatever reason, shard allocation on nodes is currently as follows:

node1     node2    node3
R0 R2     P0 P1    P2 R1

As you can see, Node1 is assigned only replica shards.

When sending queries to ES, Kibana sets the preference to a fixed value determined when Kibana is first loaded in the browser. Unfortunately, in our case that preference routes every request to shard copies on Node2 and Node3 only - nothing on Node1. The result is that Node2 (for instance) is overloaded, handling twice the work assigned to Node3.

Is there a way to tell ES to take the node hosting each shard into account and balance the load between them?
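The pinning described above can be sketched with a toy model (this is not Elasticsearch's actual routing code; the allocation table and the hashing scheme are illustrative assumptions): with a fixed preference, the chosen copy of each shard is deterministic, so the same nodes serve every request.

```python
import hashlib
import random

# Hypothetical shard allocation from the post: shard id -> list of (node, copy).
COPIES = {
    0: [("node1", "R0"), ("node2", "P0")],
    1: [("node2", "P1"), ("node3", "R1")],
    2: [("node1", "R2"), ("node3", "P2")],
}

def pick_copies(preference=None):
    """Pick one copy per shard, roughly as a coordinating node might.

    Toy model only: with a preference string the choice is a stable hash,
    so it never varies between requests; with no preference it is random,
    which spreads load across all copies over time.
    """
    chosen = []
    for shard_id, copies in sorted(COPIES.items()):
        if preference is None:
            idx = random.randrange(len(copies))
        else:
            digest = hashlib.md5(f"{preference}:{shard_id}".encode()).hexdigest()
            idx = int(digest, 16) % len(copies)
        chosen.append(copies[idx])
    return chosen

# With a fixed preference (like Kibana's timestamp) every call returns the
# same node set -- so some nodes may never be queried at all.
fixed = pick_copies(preference="1484556494830")
assert fixed == pick_copies(preference="1484556494830")
```

This is why a single Kibana tab, which reuses one preference value for its whole lifetime, keeps hammering the same subset of nodes.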

You should probably ask this in the Elasticsearch channel too

I'm fairly certain that there is nothing about the preference that Kibana sends that would prevent node1 from serving the requests. There must be some other elasticsearch setting that is leading to this behavior.

I will cross post this issue in the Elasticsearch channel too.
FYI, I tested the same query with different preference values: sometimes all nodes are hit, sometimes only node1 and node2, etc. With no preference at all, all nodes are hit every time.

What preference does KB send?

It sends a _msearch query like this one:

  "index": [
  "ignore_unavailable": true,
  "preference": 1484556494830
  "size": 0,
   "query": {
    "bool": {
      "must": [

Notice the preference value, which is actually a timestamp (epoch milliseconds).

Yeah, the preference value is just the time that the page was initialized, so it should be unique-ish per tab.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.