unassigned shards cluster_recovered
I am using Elasticsearch 6.1. Each of my indices has:
number_of_shards : 2
number_of_replicas : 3
I have checked the memory and storage, and everything looks OK.
The allocation explain API should be the first thing you try if you want to explain why some shards are unassigned. If you need help interpreting the output please copy it here.
curl -XGET 'http://localhost:9200/_cluster/allocation/explain?pretty'
{
  "index" : "25bdefghjklmprty_store_1",
  "shard" : 2,
  "primary" : false,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "reason" : "CLUSTER_RECOVERED",
    "at" : "2019-03-11T13:15:29.449Z",
    "last_allocation_status" : "no_attempt"
  },
  "can_allocate" : "no",
  "allocate_explanation" : "cannot allocate because allocation is not permitted to any of the nodes",
  "node_allocation_decisions" : [
    {
      "node_id" : "3chymNCZQU6TB6Aegev0bA",
      "node_name" : "node_1",
      "transport_address" : "127.0.0.1:9300",
      "node_decision" : "no",
      "deciders" : [
        {
          "decider" : "filter",
          "decision" : "NO",
          "explanation" : "node does not match index setting [index.routing.allocation.include] filters [rack:\"node_16\",_name:\"node_16\",size:\"big\"]"
        }
      ]
    },
    {
      "node_id" : "XLkUsRd3RfSpSt1pEI774Q",
      "node_name" : "node_16",
      "transport_address" : "127.0.0.1:9301",
      "node_attributes" : {
        "size" : "big",
        "rack" : "node_16"
      },
      "node_decision" : "no",
      "deciders" : [
        {
          "decider" : "same_shard",
          "decision" : "NO",
          "explanation" : "the shard cannot be allocated to the same node on which a copy of the shard already exists [[25bdefghjklmprty_store_1][2], node[XLkUsRd3RfSpSt1pEI774Q], [P], s[STARTED], a[id=YZaIAVK2SrWLQsHhT2QdoA]]"
        }
      ]
    }
  ]
}
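If it helps to see every unassigned shard at once rather than explaining them one at a time, the cat shards API can be filtered for unassigned copies (a sketch, assuming the same localhost:9200 endpoint as the explain call above):

```shell
# List all shards and keep only the header line and the unassigned ones.
# Columns include: index, shard, prirep (p = primary, r = replica), state, node.
curl -s 'http://localhost:9200/_cat/shards?v' | grep -E 'UNASSIGNED|prirep'
```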
Hello, I am waiting for your reply.
It looks like you are using shard allocation filtering/awareness. If so, how is this configured? How many nodes do you have in the cluster? How are these tagged with attributes?
Hello, there are many nodes in the cluster, and it is configured using tags and attributes based on the name of the node, like node_16 and size: big, etc.
This looks wrong. You have a shard allocation filter which wants to allocate this shard to a node in a rack called node_16. That's a strange name for a rack; I think this is a mistake.
Also you're trying to allocate this shard to a node whose name is node_16. That has worked: node_16 has the primary of this shard, but the replica cannot be allocated because there's no other node called node_16.
Hello, I think the rack is fine, because I am creating a rack for each node, so each node has its own rack.
node_16 is a node that is available, because other indices are there and working fine.
Sure, but you can't allocate both the primary and the replica to node_16:
"explanation" : "the shard cannot be allocated to the same node on which a copy of the shard already exists [[25bdefghjklmprty_store_1][2], node[XLkUsRd3RfSpSt1pEI774Q], [P], s[STARTED], a[id=YZaIAVK2SrWLQsHhT2QdoA]]"
Hello, actually this is the question that I am asking you.
I do not understand the question. You seem to be asking why the replica is unassigned. It's unassigned because you have instructed Elasticsearch to allocate all shard copies to the same node, but it doesn't make sense to allocate more than one copy to each node, so Elasticsearch is leaving the replica unassigned.
You can fix this by setting
"number_of_replicas": 0
on this index.
Perhaps I'm misunderstanding you. If so, can you explain at length what you're trying to do and what you're expecting to happen with this configuration?
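As a sketch (assuming the index name from the explain output above and a cluster on localhost:9200), dropping the replicas would look like this:

```shell
# Reduce the replica count to zero so the single node matched by the
# allocation filter can hold the only copy of each shard.
curl -XPUT 'http://localhost:9200/25bdefghjklmprty_store_1/_settings' \
  -H 'Content-Type: application/json' -d '
{
  "index": {
    "number_of_replicas": 0
  }
}'
```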
Hello
I want to have many nodes and I want to keep
primary : 2
replica : 5
so my node never fails to deliver the result.
What configuration do I need to manage this?
Just remove the allocation filters and let Elasticsearch decide where to allocate the shards itself.
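An allocation filter is removed by setting it to null in the index settings. A sketch, assuming the filter names and index name seen in the explain output above:

```shell
# Clear the routing allocation filters so Elasticsearch is free to
# place the shard copies on any node in the cluster.
curl -XPUT 'http://localhost:9200/25bdefghjklmprty_store_1/_settings' \
  -H 'Content-Type: application/json' -d '
{
  "index.routing.allocation.include._name": null,
  "index.routing.allocation.include.rack": null
}'
```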
So I have to remove all three filters: _name, rack, and size?
Maybe you can leave the size one, I don't know; it depends on many details of your cluster that you have not shared. But the node and rack ones definitely look wrong to me.
Do you mean _name?
Yes, sorry, I meant _name.
Hello, these settings allow indices to be assigned to a specific place, but if we remove them then shards can be assigned anywhere.
So what if a shard is assigned to some random node and then we delete that node? Then we have data loss.
Correct me if I am wrong.
It's correct that Elasticsearch will choose where to assign the shards of this index, but it is not correct that losing a node will lead to data loss. Elasticsearch will make sure that there is a replica on a different node that can take over if necessary.
What is the method to delete a node in the way that you explain?
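The usual way to retire a node safely (a sketch, not a confirmed answer from this thread; node_16 is just an example name) is to exclude it from allocation first, wait for its shards to drain onto other nodes, and only then shut it down:

```shell
# Ask the cluster to move all shards away from node_16 before removing it.
curl -XPUT 'http://localhost:9200/_cluster/settings' \
  -H 'Content-Type: application/json' -d '
{
  "transient": {
    "cluster.routing.allocation.exclude._name": "node_16"
  }
}'

# Watch until no shards remain on node_16; then it is safe to stop the node.
curl -s 'http://localhost:9200/_cat/shards?v' | grep node_16
```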