Why does one node have no shards at all?

An index I created (test123) stays UNASSIGNED, and one node (es04) holds no shards at all:

/localhost:9200/_cat/shards?v
index                        shard prirep state       docs   store ip            node
.kibana_task_manager_1       0     p      STARTED        2  46.1kb 192.168.96.11 es03
.kibana_task_manager_1       0     r      STARTED        2  46.1kb 192.168.96.5  es05
.security-7                  0     p      STARTED       36  74.7kb 192.168.96.5  es05
.security-7                  0     r      STARTED       36  77.8kb 192.168.96.3  es01
kibana_sample_data_ecommerce 0     p      STARTED     4675   4.9mb 192.168.96.11 es03
kibana_sample_data_ecommerce 0     r      STARTED     4675   4.8mb 192.168.96.9  es02
kibana_sample_data_flights   0     p      STARTED    13059   6.2mb 192.168.96.9  es02
kibana_sample_data_flights   0     r      UNASSIGNED
.kibana_1                    0     p      STARTED      118 985.2kb 192.168.96.9  es02
.kibana_1                    0     r      STARTED      118   978kb 192.168.96.8  es06
test123                      0     p      UNASSIGNED
test123                      0     r      UNASSIGNED



/localhost:9200/_cat/allocation?v
shards disk.indices disk.used disk.avail disk.total disk.percent host          ip            node
     0           0b    32.2gb    163.6gb    195.8gb           16 192.168.96.10 192.168.96.10 es04
     2        4.9mb    32.2gb    163.6gb    195.8gb           16 192.168.96.11 192.168.96.11 es03
     1        978kb    32.2gb    163.6gb    195.8gb           16 192.168.96.8  192.168.96.8  es06
     3       12.1mb    32.2gb    163.6gb    195.8gb           16 192.168.96.9  192.168.96.9  es02
     2      120.9kb    32.2gb    163.6gb    195.8gb           16 192.168.96.5  192.168.96.5  es05
     1       77.8kb    32.2gb    163.6gb    195.8gb           16 192.168.96.3  192.168.96.3  es01
     3                                                                                       UNASSIGNED



/localhost:9200/_cat/nodes?v
ip            heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
192.168.96.5            33          99   3    0.29    0.45     0.68 dilm      -      es05
192.168.96.10           32          99   3    0.29    0.45     0.68 dilm      -      es04
192.168.96.3            39          99   3    0.29    0.45     0.68 dilm      -      es01
192.168.96.8            27          99   3    0.29    0.45     0.68 dilm      -      es06
192.168.96.11           36          99   3    0.29    0.45     0.68 dilm      -      es03
192.168.96.9            41          99   3    0.29    0.45     0.68 dilm      *      es02

Cluster settings:

/localhost:9200/_cluster/settings?pretty
{
  "persistent" : { },
  "transient" : {
    "cluster" : {
      "routing" : {
        "allocation" : {
          "exclude" : {
            "rack_id" : "rack_1,rack_3,rack_2"
          }
        }
      }
    }
  }
}

Node (es04) settings:

  "nodes" : {
    "PtXX3FXcR66MoOvkZKm3gg" : {
      "name" : "es04",
      "transport_address" : "192.168.96.10:9300",
      "host" : "192.168.96.10",
      "ip" : "192.168.96.10",
      "version" : "7.4.0",
      "build_flavor" : "default",
      "build_type" : "docker",
      "build_hash" : "22e1767283e61a198cb4db791ea66e3f11ab9910",
      "total_indexing_buffer" : 51897958,
      "roles" : [
        "ingest",
        "master",
        "data",
        "ml"
      ],
      "attributes" : {
        "rack_id" : "rack_1",
        "ml.machine_memory" : "15692754944",
        "ml.max_open_jobs" : "20",
        "xpack.installed" : "true"
      },
      "settings" : {
        "cluster" : {
          "initial_master_nodes" : "es01,es02,es03",
          "name" : "docker-cluster",
          "election" : {
            "strategy" : "supports_voting_only"
          }
        },
        "node" : {
          "attr" : {
            "rack_id" : "rack_1",
            "xpack" : {
              "installed" : "true"
            },
            "ml" : {
              "machine_memory" : "15692754944",
              "max_open_jobs" : "20"
            }
          },
          "name" : "es04"
        },
        "path" : {
          "logs" : "/usr/share/elasticsearch/logs",
          "home" : "/usr/share/elasticsearch",
          "repo" : [
            "/snapshot"
          ]
        },
        "discovery" : {
          "seed_hosts" : "es01,es02,es03"
        },
        "client" : {
          "type" : "node"
        },
        "http" : {
          "cors" : {
            "allow-origin" : "\"*\"",
            "allow-headers" : "'X-Requested-With, X-Auth-Token, Content-Type, Content-Length, Authorization, Access-Control-Allow-Headers, Accept'",
            "allow-credentials" : "true",
            "enabled" : "true"
          },
          "compression" : "false",
          "type" : "security4",
          "type.default" : "netty4"
        },
        "bootstrap" : {
          "memory_lock" : "true"
        },
        "transport" : {
          "type" : "security4",
          "features" : {
            "x-pack" : "true"
          },
          "type.default" : "netty4"
        },
        "xpack" : {
          "license" : {
            "self_generated" : {
              "type" : "trial"
            }
          },
          "security" : {
            "http" : {
              "ssl" : {
                "enabled" : "true"
              }
            },
            "enabled" : "true",
            "transport" : {
              "ssl" : {
                "enabled" : "true"
              }
            }
          }
        },
        "network" : {
          "host" : "0.0.0.0"
        }
      },
      "os" : {
        "refresh_interval_in_millis" : 1000,
        "name" : "Linux",
        "pretty_name" : "CentOS Linux 7 (Core)",
        "arch" : "amd64",
        "version" : "4.15.0-65-generic",
        "available_processors" : 8,
        "allocated_processors" : 8
      },
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 1,
        "mlockall" : true
      },

Why is es04 not getting any shards? Thanks.

The allocation explain API is the tool of choice for questions like this.
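
For example, to ask why the replica of kibana_sample_data_flights is unassigned (index name and shard number taken from the _cat/shards output above; Kibana Dev Tools syntax, adjust for whichever client you used for the _cat requests):

// explain why this particular replica is not assigned
GET _cluster/allocation/explain
{
  "index": "kibana_sample_data_flights",
  "shard": 0,
  "primary": false
}

Called with no body, it explains an arbitrary unassigned shard instead. The response lists, for every node, the deciders that reject the shard, which shows exactly which setting is blocking allocation.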

That tells me the reason, very nice, thanks!
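
For reference, given the output posted above the likely culprit is the transient cluster.routing.allocation.exclude.rack_id filter: es04's rack_id attribute is rack_1, which appears in the excluded list. That is an inference from the posted settings; the explain response is authoritative. If the exclusion is no longer needed, it can be cleared by resetting it to null:

// clear the transient rack_id exclusion so shards may allocate to those racks again
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.exclude.rack_id": null
  }
}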

