Elasticsearch unassigned shard removal


(Dazith Kj) #1

Hi all,

I have configured an Elasticsearch cluster with a single node. After starting the cluster, I am seeing unassigned shards. Please help me sort this out.

NOTE: This ES cluster has only one node.

ES config file :

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please see the documentation for further information on configuration options:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration.html>
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
# cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
# node.name: node-1
#
# Add custom attributes to the node:
#
# node.rack: r1
# index.routing.allocation.total_shards_per_node: -1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
# path.data: /path/to/data
#
# Path to log files:
#
# path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
# bootstrap.memory_lock: true
#
# Make sure that the `ES_HEAP_SIZE` environment variable is set to about half the memory
# available on the system and that the owner of the process is allowed to use this limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
 network.host: 192.168.200.42
#
# Set a custom port for HTTP:
#
# http.port: 9200
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html>
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
# discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of nodes / 2 + 1):
#
# discovery.zen.minimum_master_nodes: 3
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-discovery.html>
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
# gateway.recover_after_nodes: 3
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-gateway.html>
#
# ---------------------------------- Various -----------------------------------
#
# Disable starting multiple nodes on a single system:
#
# node.max_local_storage_nodes: 1
#
# Require explicit names when deleting indices:
#
# action.destructive_requires_name: true

Version info

{
  "name": "Plunderer",
  "cluster_name": "elasticsearch",
  "cluster_uuid": "9LjKXnnbRD-deirsIlx-zQ",
  "version": {
    "number": "2.4.6",
    "build_hash": "5376dca9f70f3abef96a77f4bb22720ace8240fd",
    "build_timestamp": "2017-07-18T12:17:44Z",
    "build_snapshot": false,
    "lucene_version": "5.5.4"
  },
  "tagline": "You Know, for Search"
}

(Shane Connelly) #2

It looks like you're seeing unassigned replica shards here. By default, Filebeat tells Elasticsearch to keep 1 replica of each shard in the index. Elasticsearch makes sure replica shards aren't placed on the same node as their primary shards, so that if you lose a node you don't lose data.

Since you only have 1 node, there's no "safe" place for Elasticsearch to allocate the replicas, so it leaves them unassigned. That's what you're seeing here.

If you want the cluster to be green, you'll have to either set the index to 0 replicas (which means you could experience data loss if anything happens to that node) or add a second node, after which Elasticsearch will allocate those unassigned shards to the second node. You can read more about this here.
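If it helps to confirm, the `_cat/shards` API lists every shard and its state; unassigned replicas show up as `UNASSIGNED`. A quick sketch (the host/port here are taken from your config above — substitute your own):

```shell
# List all shards with their index, type (p = primary, r = replica) and state.
# Replica shards that cannot be allocated will be shown as UNASSIGNED.
curl -XGET '192.168.200.42:9200/_cat/shards?v'
```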

Also, you're running a very old (2.4) version of Elasticsearch. You may want to consider running a more recent version (we're up to 6.2).


(andy_zhou) #3

You only have one node? A replica can't be allocated on the same node as its primary. Add another node to the cluster.


(Dazith Kj) #4

Thanks @shanec. As you said, below is my cluster health status.

 {
    "cluster_name": "elasticsearch",
    "status": "yellow",
    "timed_out": false,
    "number_of_nodes": 1,
    "number_of_data_nodes": 1,
    "active_primary_shards": 6,
    "active_shards": 6,
    "relocating_shards": 0,
    "initializing_shards": 0,
    "unassigned_shards": 6,
    "delayed_unassigned_shards": 0,
    "number_of_pending_tasks": 0,
    "number_of_in_flight_fetch": 0,
    "task_max_waiting_in_queue_millis": 0,
    "active_shards_percent_as_number": 50
}

Can you help me remove the unassigned_shards? :slight_smile:


(Christian Dahlqvist) #5

As you only have one node, Elasticsearch cannot assign any of the replicas (Elasticsearch will never put a primary and its replica on the same node). You can update the index settings and set number_of_replicas to 0 to get rid of this.
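For example, dropping the replica count on all existing indices can be done through the update-settings API — a sketch, with the host taken from your config above:

```shell
# Set number_of_replicas to 0 on every existing index.
# "_all" targets all indices; you can use a specific index name instead.
curl -XPUT '192.168.200.42:9200/_all/_settings' -d '
{
  "index": {
    "number_of_replicas": 0
  }
}'
```

Once the replicas are removed, the cluster health should go from yellow to green.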


(Dazith Kj) #6

Thanks all,

The curl command below sorted out the issue for me.

curl -XPUT 192.168.200.42:9200/_template/zeroreplicas -d '
{
  "template" : "*",
  "settings" : {
    "number_of_replicas" : 0
  }
}'
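One caveat worth noting: an index template only applies to indices created after the template is added, so any indices that already exist keep their old replica count and may stay yellow until their settings are updated directly. Either way, the result can be verified with the cluster health API (host taken from the config above):

```shell
# After replicas are removed, this should report "status": "green"
# and "unassigned_shards": 0.
curl -XGET '192.168.200.42:9200/_cluster/health?pretty'
```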

(system) #7

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.