I've encountered some unexpected behaviour during my DR testing which I'm
trying to explain.
I have a 3-node geographically-separated cluster with the following
settings:
index.number_of_shards=5
index.number_of_replicas=2
discovery.zen.minimum_master_nodes: 2
I use number_of_replicas=2 for durability, so that each node will contain a
full set of data (meaning I can lose 2 of my 3 nodes without losing any
data).
However, I'm finding that if I shut down 2 nodes, then adjust
minimum_master_nodes on the remaining node to 1 and restart that node,
the cluster stays yellow with all shards unassigned. They remain
unassigned until I manually reduce number_of_replicas down to 1 or 0.
Once number_of_replicas <= the number of nodes, the shards reassign and
the cluster goes green. Just wondering whether this behaviour is as designed.
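For anyone wanting to reproduce this, here's roughly the sequence I'm using,
assuming a node on localhost:9200 and the 1.x REST API (the index name
"test" below is just a placeholder, not my real index):

    # On the surviving node, lower the quorum in elasticsearch.yml, then restart:
    #   discovery.zen.minimum_master_nodes: 1

    # Watch cluster health and the unassigned shard count:
    curl -s 'localhost:9200/_cluster/health?pretty'

    # The workaround: drop the replica count so the shards allocate:
    curl -XPUT 'localhost:9200/test/_settings' -d '{
      "index": { "number_of_replicas": 0 }
    }'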
I understand there's no point assigning a primary and its replicas to a
single node, but in my case ES won't even allocate the primary (until I
reduce the number of replicas).
On Wednesday, January 7, 2015 4:58:58 PM UTC+13, Mark Walkom wrote:
It's not recommended to run an Elasticsearch cluster across geographically
dispersed locations.
You cannot assign both a primary and its replicas to a single node; it
defeats the purpose! So it's as designed.
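If you want to see where each shard copy has landed, the _cat API will show
it (assuming you're on 1.x or later; the output below is illustrative, not
from your cluster, with columns trimmed for readability):

    curl -s 'localhost:9200/_cat/shards?v'
    # index shard prirep state      node
    # test  0     p      STARTED    node-1
    # test  0     r      UNASSIGNED
    # test  0     r      UNASSIGNED

Two UNASSIGNED replica rows per shard would be the expected picture on a
one-node cluster, since neither replica has a second or third node to go to.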
So you're correct that the cluster cannot be yellow with primaries
unassigned. However, it would still be good to know why ES would refuse to
allocate primary shards when the number of replicas exceeds the number of
nodes.
Cheers,
Mat
On Thursday, January 8, 2015 10:03:51 AM UTC+13, Mark Walkom wrote:
A cluster cannot be yellow if any primaries are unassigned. Are you sure
it's yellow before you set replicas to 0?
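To double-check which state you're really in, the health API spells it out
(a sketch, assuming a node on localhost):

    curl -s 'localhost:9200/_cluster/health?level=indices&pretty'
    # "status" means:
    #   green  - all primaries and all replicas assigned
    #   yellow - all primaries assigned, one or more replicas unassigned
    #   red    - one or more primaries unassigned

If the status is actually red rather than yellow before you drop the replica
count, that would square the two observations in this thread.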