Load balancing on heterogeneous nodes in a cluster

I am currently evaluating Elasticsearch as our text search solution. The
problem is that we cannot guarantee that we can always allocate the same
hardware when new nodes are added to our cluster, therefore we need a
way to distribute the load in a smart way based on machine power.

I read the documentation and the source code and found that there is a
BalancedShardsAllocator that balances shards between nodes based on shard
count. Fundamentally, however, the BalancedShardsAllocator still treats the
nodes in the cluster as homogeneous.

It seems that we could implement our own ShardsAllocator to distribute shards
by a predefined machine factor (probably the simplest way). I would like to
know whether I have missed something, or whether there is already some
built-in feature that provides the ability we want.

I also have a related second question. Currently our search is not IO-bound,
because all of our machines have enough memory, but each machine has a
different number of CPU cores. I would like client search requests to be
distributed to nodes based on the number of CPU cores rather than by simple
round-robin. Is there any way to do that?


Hi,

please have a look at shard allocation filtering, which lets you define that
specific indices should be put onto your stronger or weaker boxes (or whatever
criteria you define). This might be sufficient for a first try.
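
As a rough sketch of that approach (the attribute name "box_type" and its
values are just made up for illustration), you could tag each node in its
elasticsearch.yml:

   node.box_type: strong      # on the powerful machines
   node.box_type: weak        # on the smaller machines

and then pin an index onto the strong boxes via the index settings API:

   curl -XPUT 'localhost:9200/myindex/_settings' -d '{
     "index.routing.allocation.include.box_type": "strong"
   }'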

If you need more sophisticated logic (based on CPU power or memory), you
could write your own decider, but I think you can get quite far with shard
allocation filtering.

This also applies to your second question: you could put the indices which
are searched a lot on the more powerful machines to make sure that indexing
and querying are as fast as possible.

Hope this helps...

--Alex



Right now, ES can control the total number of shards on a node, but not a CPU
strength factor. You would have to write your own decider based on the
criteria you gave.
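
(That per-node limit is the index.routing.allocation.total_shards_per_node
index setting, e.g.

   curl -XPUT 'localhost:9200/myindex/_settings' -d '{
     "index.routing.allocation.total_shards_per_node": 2
   }'

but note it is a per-index cap on how many of that index's shards may sit on
a single node, not a per-node weight.)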

Note that the mere count of CPU cores may not be a reliable input for an
allocation decider, because it does not determine the total shard processing
capacity: there can be very strong cores or very weak cores on different CPUs.

Even if you write a CPU-strength-based decider, the weakest node will still
determine the overall performance of query and index operations. That is, you
cannot compensate for weak CPU power with an allocation decider.

Jörg


Thanks for the replies. We are now considering and discussing our balancing
policy; all the information is helpful.
