Multiple nodes on the same machine: replicas?

Hello, everyone.

I'm trying to set up a cluster of 10 nodes, distributed across five different
computers.
Each computer has 2 full installations of ES v0.90.5.
All indices on this cluster have 5 shards and 1 replica.

I want to make sure that a replica will never be on the same computer as
its primary, because a computer crash is far more common than an ES crash.
I'm currently trying a method using "node.machine: aValue" and
"cluster.routing.allocation.awareness.attributes: machine" in the
configuration of my nodes, where aValue is different on each machine.
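Concretely, a minimal sketch of what this looks like in elasticsearch.yml
(the value "machine-01" below is just a placeholder; both nodes running on a
given computer share that computer's value):

    # elasticsearch.yml for both nodes of one computer
    # ("machine-01" is a placeholder; each computer uses its own value)
    node.machine: machine-01

    # allocate the primary and replica copies of a shard on nodes with
    # different values of the "machine" attribute
    cluster.routing.allocation.awareness.attributes: machine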

According to this issue:
Shard allocation awareness (rack aware, zone aware, for example) · Issue #1352 · elastic/elasticsearch · GitHub
(https://github.com/elastic/elasticsearch/issues/1352)

and according to the doc for 0.90.5 as well, this should ensure that I get
what I want.
My problem is that, when creating my indices, all primary shards get
allocated on the two nodes of a single machine, and no replica gets
created, thus resulting in a yellow state.
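Checking with the cluster health API confirms this (a sketch, assuming the
nodes listen on the default localhost:9200):

    # overall status plus the number of unassigned (replica) shards
    curl -s 'http://localhost:9200/_cluster/health?pretty'

    # per-index detail, to see which indices are yellow
    curl -s 'http://localhost:9200/_cluster/health?level=indices&pretty'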

Is there a way to have the shards distributed evenly across all nodes,
while ensuring that my replicas will never be on the same computer as their
primaries?

Thanks for any info on this matter.


Have you tried upgrading? That's a super old version, which may not be helping.

Though that is the right setting.
On 06/05/2015 3:24 pm, "DH" ciddp195@gmail.com wrote:


afaik, you don't need to change the cluster.routing.* settings; es 0.90 should
be intelligent enough to redistribute the shards evenly across the nodes in
the cluster.
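A rough way to double-check both the attribute and the resulting distribution
(a sketch, assuming the nodes listen on localhost:9200):

    # each node should report the custom "machine" attribute it was started with
    curl -s 'http://localhost:9200/_nodes?pretty'

    # the routing table in the cluster state shows which node each shard copy
    # (primary and replica) is currently assigned to
    curl -s 'http://localhost:9200/_cluster/state?pretty'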

hth

jason

On Wed, May 6, 2015 at 3:24 PM, DH ciddp195@gmail.com wrote:


Why do you want to have 2 nodes per machine?

--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs

On 6 May 2015 at 13:45, Jason Wee peichieh@gmail.com wrote:
