Elasticsearch is not rebalancing

I have an index with 4 shards and 1 replica, so that each server holds 1
primary and 1 replica shard.
When the cluster is first set up, everything is well distributed.
If one server goes down, one of the replicas is promoted to primary, but when
the server comes back up the system is not rebalanced: the primary stays
where it is and does not move back to its original server.
Any idea what's wrong? Is there a setting for this?
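
You can see the placement with the cat shards API (the index name here is
just an example):

  curl -XGET 'localhost:9200/_cat/shards/myindex?v'

The prirep column marks each copy as p (primary) or r (replica), and after
the restart I see two p rows sitting on the same server.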

Regards
Bernd

There is nothing wrong, this is expected behavior.

It does not matter where the primaries are. Primaries and replicas have
exactly the same information about the cluster state and do exactly the same
amount of work. The only exception is that primary shards execute write
operations first, before forwarding them to the replicas.

So there is no need to "rebalance".

Best,

Jörg

That is exactly what I see: the server that now holds 2 primaries has a
higher load during bulk loading, because both primaries are writing. That is
why I want a rebalance that moves the primary back to its original server, so
the load is evenly distributed. So there is no way to achieve this with ES?

When the index is freshly created, I do get a well-distributed system.
Or is that just luck, and would creating the index several times give a
different result each time?

Regards,
Bernd

I don't think the write load issue is related to which shards are primaries
and which are replicas. Writes are spread across all shards of an index.

If the shard count is not divisible by the number of nodes, the write load
can be skewed, so that one or more nodes have more work to do than the
others.

This can be fixed at index creation time by choosing a shard count such that
each node carries the same number of shards for the index.
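
For example, a minimal sketch of such an index creation (the index name, the
assumed 3-node cluster, and the exact numbers are only an illustration):

  curl -XPUT 'localhost:9200/myindex' -d '
  {
    "settings": {
      "number_of_shards": 6,
      "number_of_replicas": 1
    }
  }'

With 3 data nodes, each node then carries 4 shard copies, typically 2
primaries and 2 replicas right after creation.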

Best,

Jörg

I don't understand what you are trying to tell me, but to sum it up:
rebalancing primary shards is not possible with ES.
-1 for ES.

So I have to stick with Solr, which can rebalance leaders via
"/admin/collections?action=REBALANCELEADERS".
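
A leader rebalance there is a single call along these lines (host, port, and
collection name are just placeholders, and it assumes preferredLeader has
been set on the desired replicas):

  curl 'http://localhost:8983/solr/admin/collections?action=REBALANCELEADERS&collection=mycollection'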

Regards
Bernd
