Removing a node

I'm attempting to remove a node from a 5.6 cluster and did the following:

# cat /tmp/rmnode
{
  "transient": {
    "cluster.routing.allocation.exclude._ip": "10.3.3.12"
  }
}
# curl -s -XPUT 10.3.3.12:9200/_cluster/settings -H 'Content-Type: application/json' -d @/tmp/rmnode | jq .; echo
{
  "acknowledged": true,
  "persistent": {},
  "transient": {
    "cluster": {
      "routing": {
       "allocation": {
          "exclude": {
            "_ip": "10.3.3.12"
          }
        }
      }
    }
  }
}
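
To watch progress, something like this should work (a sketch: the grep lists shards currently on the move plus any shards still reporting 10.3.3.12 as their ip, i.e. not yet evacuated):

# curl -s -XGET '10.3.3.12:9200/_cat/shards?v' | grep -E 'RELOCATING|10\.3\.3\.12'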

Relocation also started, but it now seems to have stopped, and there are still 15 shards on the node:

# curl -s -XGET 10.3.3.12:9200/_cluster/health?pretty |  jq .
{
  "cluster_name": "mx9es",
  "status": "green",
  "timed_out": false,
  "number_of_nodes": 5,
  "number_of_data_nodes": 4,
  "active_primary_shards": 140,
  "active_shards": 280,
  "relocating_shards": 0,
  "initializing_shards": 0,
  "unassigned_shards": 0,
  "delayed_unassigned_shards": 0,
  "number_of_pending_tasks": 0,
  "number_of_in_flight_fetch": 0,
  "task_max_waiting_in_queue_millis": 0,
  "active_shards_percent_as_number": 100
}

# curl -s -XGET '10.3.3.12:9200/_cat/allocation?v'
shards disk.indices disk.used disk.avail disk.total disk.percent host         ip           node
    88      241.8gb   242.6gb    202.6gb    445.2gb           54 10.3.3.11    10.3.3.11    cbdB
    15       38.3gb      39gb    406.2gb    445.2gb            8 10.3.3.12    10.3.3.12    cbdC
    89      249.2gb   250.2gb    634.2gb    884.4gb           28 10.3.3.13    10.3.3.13    cbdD
    88        272gb   273.2gb    175.8gb    449.1gb           60 10.3.3.10    10.3.3.10    cbdA

How do I get the last 15 shards to move elsewhere?
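
One way to see why a particular shard refuses to leave the excluded node is the cluster allocation explain API (a sketch; "my-index" and shard 0 are placeholders for one of the 15 shards still shown on 10.3.3.12):

# cat /tmp/explain
{
  "index": "my-index",
  "shard": 0,
  "primary": true
}
# curl -s -XGET 10.3.3.12:9200/_cluster/allocation/explain -H 'Content-Type: application/json' -d @/tmp/explain | jq .

The response shows whether the shard can remain on its current node and which deciders (disk watermarks, same-shard constraints, etc.) are blocking it from moving to the other nodes.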

Patching all nodes to the latest 5.6.10 and restarting them one by one seems to have finished evacuating the last shards :slight_smile:
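
For reference, a rolling restart is usually wrapped in the disable/re-enable allocation dance (a sketch of the standard pattern, not necessarily the exact commands used here; restarting via systemctl is an assumption about how the service is managed):

# curl -s -XPUT 10.3.3.12:9200/_cluster/settings -H 'Content-Type: application/json' -d '{ "transient": { "cluster.routing.allocation.enable": "none" } }'
(restart Elasticsearch on the node being patched, e.g. systemctl restart elasticsearch)
# curl -s -XPUT 10.3.3.12:9200/_cluster/settings -H 'Content-Type: application/json' -d '{ "transient": { "cluster.routing.allocation.enable": "all" } }'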

Now I can redeploy the removed node...
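
Before the redeployed node can take shards again, the transient exclude setting presumably needs to be cleared; setting it to null removes it (a sketch):

# curl -s -XPUT 10.3.3.12:9200/_cluster/settings -H 'Content-Type: application/json' -d '{ "transient": { "cluster.routing.allocation.exclude._ip": null } }' | jq .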
