I'm trying to set up a cluster of 10 nodes, distributed on five different
computers.
Each computer has 2 full installations of ES v0.90.5.
All indices on this cluster have 5 shards and 1 replica.
I want to make sure that a replica will never be on the same computer as
its primary, because a computer crash is far more common than an ES crash.
I'm currently trying a method using "node.machine: aValue" and
"cluster.routing.allocation.awareness.attributes: machine" in the
configuration of my nodes, where aValue is different on each machine.
According to this:
and according to the doc for 0.90.5 as well, this should ensure that I get
what I want.
My problem is that, when creating my indices, all primary shards get
allocated on the two nodes of a single machine, and no replica gets
created, thus resulting in a yellow state.
Is there a way to have the shards distributed evenly through all nodes,
while ensuring that my replicas will never be on the same computer as their
primaries?
Have you tried upgrading? That's a very old version, which may be part of
the problem. That said, allocation awareness is the right setting for this.
On 06/05/2015 3:24 pm, "DH" ciddp195@gmail.com wrote:
AFAIK you don't need to change the cluster.routing.* settings; ES 0.90 should be intelligent enough to redistribute the shards evenly across the nodes in the cluster.
hth
jason
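To make the desired invariant concrete, here is a small stand-alone sketch (plain Python, not ES internals) of the behaviour being asked for: primaries spread round-robin across all nodes, with each replica forced onto a node whose machine attribute differs from its primary's, mirroring what allocation awareness is meant to enforce.

```python
# Toy model of machine-aware shard allocation (illustration only).
# 10 nodes, 2 per physical machine, as in the cluster described above.
from itertools import cycle

nodes = [{"name": f"node{i}", "machine": f"machine{i // 2}"}
         for i in range(10)]

def allocate(num_shards, nodes):
    """Round-robin primaries across nodes; for each replica, keep
    advancing the ring until we hit a node on a different machine."""
    ring = cycle(nodes)
    allocation = []
    for shard in range(num_shards):
        primary = next(ring)
        replica = next(ring)
        while replica["machine"] == primary["machine"]:
            replica = next(ring)
        allocation.append({"shard": shard,
                           "primary": primary,
                           "replica": replica})
    return allocation

result = allocate(5, nodes)
# Invariant: no replica ever shares a machine with its primary.
assert all(e["primary"]["machine"] != e["replica"]["machine"]
           for e in result)
```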