Search preference

Hi,
we're seeing strange behaviour in our two-data-node cluster. Monitoring with Kibana, we can see that only one node is involved in search (say node1): all queries are routed to that one!

Looking at Kibana's search queries, the 'preference' param is set (to a custom string, the session ID), and this seems to be the origin of the problem. If we execute the same query without 'preference', both nodes are involved in the search.
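For reference, this is roughly how we compared the two cases (a sketch with the Python client; the host, index name and query are made up, and the parameter style assumes the 8.x elasticsearch-py client):

```python
from elasticsearch import Elasticsearch

# Hypothetical host and index, just to illustrate the comparison.
es = Elasticsearch("http://node1:9200")
query = {"match": {"message": "error"}}

# With a custom preference string (what Kibana does with its session ID):
# every request carrying the same string is routed to the same shard copies.
with_pref = es.search(index="logs-*", query=query, preference="my-session-id")

# Without preference: requests are spread over primaries and replicas,
# so both nodes end up serving searches.
without_pref = es.search(index="logs-*", query=query)
```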

Right now all primary shards (indexes with 2 shards + 1 replica) are on node1 and all replicas are on node2. If we reboot node1, all primaries move to node2 and all queries are executed on node2.
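This is how we check where the shards are (a sketch; the host is made up):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://node1:9200")  # hypothetical host

# One line per shard: index, shard number, p/r (primary/replica), state, node.
# In our case every 'p' row shows node1 and every 'r' row shows node2.
print(es.cat.shards(h="index,shard,prirep,state,node", v=True))
```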

So, is this the expected behaviour: all queries resolved by just one node when using a custom string as the preference param?

Is there a way to rebalance primary shards across the cluster? We have tried with 'cluster.routing.rebalance.enable' but it is not working for us.
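This is roughly what we tried (a sketch with the Python client; as far as I understand, this setting only controls whether rebalancing is allowed at all, and the balancer looks at shard counts per node without distinguishing primaries from replicas, which would explain why it doesn't help here):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://node1:9200")  # hypothetical host

# 'all' is already the default: rebalancing is allowed for every shard type.
# It evens out the *number* of shards per node, but says nothing about
# whether a given copy is a primary or a replica.
es.cluster.put_settings(
    persistent={"cluster.routing.rebalance.enable": "all"}
)
```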

Why are you using preference in the first place?

I think it's the expected behavior as you are using a custom string.

Request body search | Elasticsearch Guide [8.11] | Elastic says:

A custom value will be used to guarantee that the same shards will be used for the same custom value. This can help with "jumping values" when hitting different shards in different refresh states. A sample value can be something like the web session id, or the user name.

You are always hitting the same shards.

I'm not; Kibana is using a custom string in its queries.

Always hitting the same shards is OK, but why only the primary ones? In my scenario all primary shards are on one node, and that's a big problem: Elasticsearch is using only half of the available resources!

I'm moving your question to #kibana as I don't know why it's done like this.

Getting "consistent" answers from ES may be useful, asking to primary shards only, in log analysis scenario, may be useful, since they´ll be more updated.

But then we need a way to spread primary shards across the cluster. I'm a little confused here: cluster.routing.rebalance.enable seems to be useful for that, but it is not working for us (it's enabled and all primary shards are still on the same node); on the other hand, https://github.com/elastic/elasticsearch/issues/3293 suggests it is not possible.
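The only workaround I have found so far is to manually cancel a primary allocation so that its replica on the other node gets promoted (a sketch only; the index name and host are made up, and I'm not sure this is a recommended practice):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://node1:9200")  # hypothetical host

# Cancel the allocation of shard 0's primary on node1. With allow_primary set,
# the replica on node2 should be promoted to primary and the cancelled copy
# re-initialised on node1 as a replica, effectively swapping the roles.
es.cluster.reroute(
    commands=[
        {
            "cancel": {
                "index": "logs-2017",  # hypothetical index name
                "shard": 0,
                "node": "node1",
                "allow_primary": True,
            }
        }
    ]
)
```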

How can we spread primary shards across the cluster?

Moving back to #elasticsearch

This is going to be solved in 6.0 with:

Thanks to @javanna for the link :smiley:

Thanks! Very interesting link. In the meantime, is it possible to rebalance primary shards across the cluster or not? I'm confused about this.

I think there are a "few" 2 data nodes clusters out there using Kibana. All we are using 50% of resources!! Rebalancing primary shards is the only way to go (I think) and it seems to me like a CRITICAL issue, maybe a backport is worth considering.

I don't know. You can always comment on the issue and ask for it but I'm really unsure it will be backported.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.