I'm new to the inner workings of Elasticsearch and stumbled on an error while testing a change in our staging environment. Before the change we had 3 dedicated master nodes (no data role) and 2 data nodes. We're now adding the data role to the master nodes plus one more data node, for a total of 6 data nodes. The change worked for the most part, but one of our two clusters ended up yellow (the other is green) because a single shard could not be allocated.
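The output below is from the cluster allocation explain API; the request was along these lines (reconstructed from memory, so treat the body as approximate):

GET _cluster/allocation/explain
{
  "index": "logstash-2020.11.01",
  "shard": 2,
  "primary": false
}

It returned: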
{
  "index" : "logstash-2020.11.01",
  "shard" : 2,
  "primary" : false,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "reason" : "CLUSTER_RECOVERED",
    "at" : "2021-03-01T22:56:52.718Z",
    "last_allocation_status" : "no_attempt"
  },
  "can_allocate" : "no",
  "allocate_explanation" : "cannot allocate because allocation is not permitted to any of the nodes",
  "node_allocation_decisions" : [
    {
      "node_id" : "1uBg89USRmGOPODA6erWZA",
      "node_name" : "log-db-master-2-self-monitoring",
      "transport_address" : "10.208.68.12:9302",
      "node_attributes" : {
        "xpack.installed" : "true",
        "transform.node" : "false"
      },
      "node_decision" : "no",
      "deciders" : [
        {
          "decider" : "filter",
          "decision" : "NO",
          "explanation" : "node does not match index setting [index.routing.allocation.require] filters [_name:\"log-db-data-1-self-monitoring\"]"
        }
      ]
    },
    {
      "node_id" : "3g5Qg0TJRzmBNEp9OuJgig",
      "node_name" : "log-db-master-1-self-monitoring",
      "transport_address" : "10.208.68.10:9302",
      "node_attributes" : {
        "xpack.installed" : "true",
        "transform.node" : "false"
      },
      "node_decision" : "no",
      "deciders" : [
        {
          "decider" : "filter",
          "decision" : "NO",
          "explanation" : "node does not match index setting [index.routing.allocation.require] filters [_name:\"log-db-data-1-self-monitoring\"]"
        }
      ]
    },
    {
      "node_id" : "5TI76CnnQvWJ9O5X5UYpPA",
      "node_name" : "log-db-data-1-self-monitoring",
      "transport_address" : "10.208.68.18:9302",
      "node_attributes" : {
        "xpack.installed" : "true",
        "transform.node" : "true"
      },
      "node_decision" : "no",
      "deciders" : [
        {
          "decider" : "same_shard",
          "decision" : "NO",
          "explanation" : "a copy of this shard is already allocated to this node [[logstash-2020.11.01][2], node[5TI76CnnQvWJ9O5X5UYpPA], [P], s[STARTED], a[id=qJTT2sPERLWcphpu_jnyHg]]"
        }
      ]
    },
    {
      "node_id" : "BlrC83-FQmGLXFuyhcvlRw",
      "node_name" : "log-db-data-2-self-monitoring",
      "transport_address" : "10.208.68.20:9302",
      "node_attributes" : {
        "xpack.installed" : "true",
        "transform.node" : "true"
      },
      "node_decision" : "no",
      "deciders" : [
        {
          "decider" : "filter",
          "decision" : "NO",
          "explanation" : "node does not match index setting [index.routing.allocation.require] filters [_name:\"log-db-data-1-self-monitoring\"]"
        }
      ]
    },
    {
      "node_id" : "RrNLsPvuQ8uX-UpjoETaDw",
      "node_name" : "log-db-master-3-self-monitoring",
      "transport_address" : "10.208.68.14:9302",
      "node_attributes" : {
        "xpack.installed" : "true",
        "transform.node" : "false"
      },
      "node_decision" : "no",
      "deciders" : [
        {
          "decider" : "filter",
          "decision" : "NO",
          "explanation" : "node does not match index setting [index.routing.allocation.require] filters [_name:\"log-db-data-1-self-monitoring\"]"
        }
      ]
    },
    {
      "node_id" : "zBEgUnvnQnKWER5R94qk5A",
      "node_name" : "log-db-data-0-self-monitoring",
      "transport_address" : "10.208.68.16:9302",
      "node_attributes" : {
        "xpack.installed" : "true",
        "transform.node" : "true"
      },
      "node_decision" : "no",
      "deciders" : [
        {
          "decider" : "filter",
          "decision" : "NO",
          "explanation" : "node does not match index setting [index.routing.allocation.require] filters [_name:\"log-db-data-1-self-monitoring\"]"
        }
      ]
    }
  ]
}
I see that the other nodes' names don't match the [index.routing.allocation.require] filter, but why would that block the shard from being allocated? The cluster was able to allocate everything else except this one shard.
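For reference, I'm assuming the setting the filter decider is complaining about can be inspected (and, if it turns out to be stale, cleared) roughly like this, with the index name taken from the output above, though I'd rather understand why the filter is there before removing it:

GET logstash-2020.11.01/_settings

PUT logstash-2020.11.01/_settings
{
  "index.routing.allocation.require._name": null
}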
Our elasticsearch.yml file for one of our data nodes:
# Managed by Ansible, do not edit directly.
# Node and port settings based on node type
node.name: log-db-data-1-self-monitoring
path.data: /data/log-db-self-monitoring/data
node.roles: [ data, ingest, remote_cluster_client, transform ]
http.port: 9202
transport.port: 9302
path.logs: /var/log/elasticsearch
# Network settings
network.host: 10.208.68.18
# Recovery settings
gateway.expected_data_nodes: 2
gateway.recover_after_time: 30s
gateway.recover_after_data_nodes: 1
discovery.seed_hosts: ["10.208.68.10:9302","10.208.68.12:9302","10.208.68.14:9302","10.208.68.16:9302","10.208.68.18:9302","10.208.68.20:9302",]
# Cluster settings
cluster.name: log-db-self-monitoring-dal13
cluster.initial_master_nodes: ["log-db-master-1-self-monitoring","log-db-master-2-self-monitoring","log-db-master-3-self-monitoring",]
[xpack settings deleted for brevity]
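Node roles after the change can be double-checked with something like:

GET _cat/nodes?v&h=name,node.role,master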