I have an index with 4 shards and 1 replica so that each server has 1
primary and 1 replica.
When the system is first set up, everything is well distributed.
If one server goes down, the primary is moved to one of the replicas, but
when the server comes up again the system is not rebalanced. The primary
stays where it is and does not move back to its original server.
Any idea what's wrong? Is there a setting for this?
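For reference, the cat shards API shows which node currently holds each
primary and replica copy. A minimal check, assuming a node is reachable on
localhost:9200:

  curl -XGET 'http://localhost:9200/_cat/shards?v'

The prirep column marks each copy as p (primary) or r (replica), so a skew
after a node restart is easy to spot.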
There is nothing wrong; this is expected behavior.
It does not matter where the primaries are. Primaries and replicas hold
exactly the same information about the cluster state and do exactly the same
amount of work. The only exception is that primary shards perform a write
operation first, before forwarding it to the replicas.
So there is no need to "rebalance".
Best,
Jörg
That is exactly what I see: the one server which now has 2 primaries shows
higher load, because two primaries are writing during bulk loading.
That is why I want a rebalance that moves the primary back to its original
server, to get a well-distributed load.
So there is no way to achieve this with ES?
But when the index is first created I do get a well-distributed system.
Or is that just luck, and creating the index several times would give a
different result each time?
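For reference, the cluster reroute API can move individual shard copies
between nodes, though as far as I know it does not let you pick which copy
acts as the primary. A minimal sketch, with hypothetical index and node
names:

  curl -XPOST 'http://localhost:9200/_cluster/reroute' -d '{
    "commands": [
      { "move": { "index": "my_index", "shard": 0,
                  "from_node": "node-1", "to_node": "node-2" } }
    ]
  }'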
Regards,
Bernd
I don't understand what you are trying to tell me, but to sum it up:
rebalancing primary shards is not possible with ES.
-1 for ES.
So I have to stick with Solr, which can rebalance leaders via
"/admin/collections?action=REBALANCELEADERS".
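A full call looks roughly like this, assuming the default Solr port and a
hypothetical collection name:

  curl 'http://localhost:8983/solr/admin/collections?action=REBALANCELEADERS&collection=my_collection'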
Regards
Bernd
On Monday, 3 November 2014 at 13:43:11 UTC+1, Jörg Prante wrote:
I think the write-load issue is not related to the primary/replica shard
placement. Writes are spread across all shards of an index.
If you have a shard count that is not divisible by the number of nodes,
the write load can be skewed, so one or more nodes have more work than
others.
This can be fixed at index creation time by choosing a shard count such that
each node carries the same number of shards for the index.
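For example, with 4 data nodes and 1 replica, choosing 4 primary shards
means 8 shard copies in total, so each node can carry exactly two. A minimal
sketch with a hypothetical index name (newer ES versions also need
-H 'Content-Type: application/json'):

  curl -XPUT 'http://localhost:9200/my_index' -d '{
    "settings": { "number_of_shards": 4, "number_of_replicas": 1 }
  }'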
Best,
Jörg