Preventing cluster health (status) from being RED

(Siddharth Gupta) #1


I have recently started using Elasticsearch in my application and am facing the following issues::

  1. Master Node Not Discovered
  2. Cluster Block Exception

Quite frequently I notice that the health of my cluster is RED (all primary shards are missing). Because of this, I can neither connect to the cluster nor index my data. Where do the shards go?

Can anybody help me with this issue?

These are the stats of my cluster (it has since changed from red to yellow, but how do I stop it from going back to red, and how can I get the unassigned shards to start initializing?)::

  INDEX NAME ==> index_temp12_siddharth_vectors
  NODE NAME ==> Sid_node_vectors_1

  "cluster_name" : "siddharth_cluster_vectors_1",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 7,
  "active_shards" : 7,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 7,
  "number_of_pending_tasks" : 0
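For what it's worth, the `status` field follows mechanically from the shard counts in that output. A minimal sketch of the logic (`derive_status` is a hypothetical helper for illustration, not an Elasticsearch API):

```python
def derive_status(active_primary, expected_primary, unassigned):
    """Mimic how cluster health status relates to shard assignment."""
    if active_primary < expected_primary:
        return "red"     # at least one primary missing: reads/writes on it fail
    if unassigned > 0:
        return "yellow"  # all primaries active, but some replicas unassigned
    return "green"       # everything assigned

# Numbers from the health output above: 7 primaries active,
# 7 replica shards unassigned (one replica per primary, nowhere to put them).
print(derive_status(active_primary=7, expected_primary=7, unassigned=7))  # yellow
```

So yellow here means your data is fully available but unreplicated; red would mean a primary itself is gone, which matches the connect/index failures you describe.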

Below are the stats of my shards::

  .marvel-2015.08.18 0 p STARTED 17934 13.1mb Sid_node_vectors_1
  .marvel-2015.08.18 0 r UNASSIGNED
  index_temp12_siddharth_vectors 2 p STARTED 201 184.4mb Sid_node_vectors_1
  index_temp12_siddharth_vectors 2 r UNASSIGNED
  index_temp12_siddharth_vectors 0 p STARTED 200 182.4mb Sid_node_vectors_1
  index_temp12_siddharth_vectors 0 r UNASSIGNED
  index_temp12_siddharth_vectors 3 p STARTED 201 155.2mb Sid_node_vectors_1
  index_temp12_siddharth_vectors 3 r UNASSIGNED
  index_temp12_siddharth_vectors 1 p STARTED 199 142.5mb Sid_node_vectors_1
  index_temp12_siddharth_vectors 1 r UNASSIGNED
  index_temp12_siddharth_vectors 4 p STARTED 200 156.9mb Sid_node_vectors_1
  index_temp12_siddharth_vectors 4 r UNASSIGNED
  .marvel-2015.08.19 0 p STARTED 236 948.2kb Sid_node_vectors_1
  .marvel-2015.08.19 0 r UNASSIGNED
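Note that every UNASSIGNED row above is a replica (the `r` column). Elasticsearch never assigns a replica to the same node that holds its primary, so on a single-node cluster the replicas can never be placed and the status can be at best yellow. If you intend to stay on one node, you can drop the replicas; a sketch via the update index settings API, using the index name from this thread:

```
PUT /index_temp12_siddharth_vectors/_settings
{
  "index" : {
    "number_of_replicas" : 0
  }
}
```

The alternative, if you want yellow to become green without losing redundancy, is to add a second data node so the replicas have somewhere to go.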



We had the same problem a few days ago. Please check elasticsearch.log and the free space on the filesystem containing your data.

ES doesn't start replication if there is too little free space. We found a corresponding entry in the log::

  NO(less than required [15.0%] free disk on node, free: [15.0%])

After we enlarged the filesystem, ES started the replication.
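For reference, the thresholds behind that log line are Elasticsearch's disk-based shard allocation watermarks, tunable in elasticsearch.yml. The values below are the documented defaults, shown only so you know which knobs are involved:

```yaml
# Disk-based shard allocation (elasticsearch.yml) -- defaults shown.
cluster.routing.allocation.disk.threshold_enabled: true
# Above this disk usage, no new shards are allocated to the node:
cluster.routing.allocation.disk.watermark.low: "85%"
# Above this disk usage, shards are actively relocated off the node:
cluster.routing.allocation.disk.watermark.high: "90%"
```

Raising the watermarks only postpones the problem, though; freeing or adding disk space, as described above, is the real fix.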

Good luck!

(Siddharth Gupta) #3

Thank you for the reply!

Actually, I am using Elasticsearch as a dependency added in my pom.xml.
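For context, such a dependency would look something like this in pom.xml (the version below is only an example from the 1.x era this thread appears to be on; use whatever your project actually pins):

```xml
<dependency>
  <groupId>org.elasticsearch</groupId>
  <artifactId>elasticsearch</artifactId>
  <!-- example version only -->
  <version>1.7.1</version>
</dependency>
```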


In my elasticsearch.yml I have configured the following two paths::

  1. path.data: /home/grid/Siddharth/Project/data
     All the indexes reside in this data directory.

  2. path.logs: /home/grid/Siddharth/Project/logs
     But somehow Elasticsearch is not able to write the log file to this particular log folder.
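For reference, those two settings as they would sit together in elasticsearch.yml. When the log file is not being written, the most common cause is that the OS user running the Elasticsearch process lacks write permission on the logs directory, so that is worth checking first:

```yaml
# Paths as described above; the user running Elasticsearch must be
# able to write to both directories.
path.data: /home/grid/Siddharth/Project/data
path.logs: /home/grid/Siddharth/Project/logs
```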

Can you help me out with this issue?

(system) #4