Cluster keeps going yellow even after scaling out nodes (client & data)

For some reason my cluster is now constantly dropping into a yellow state. I've added more data and client nodes and told it to reassign the shards; that works for a little while, but then the problem comes back. I've installed ElasticHQ to look at metrics, and everything there is green and appears healthy. The Java heap is set to 12 GB on each data node and 1 GB on each client node. Each of the machines the data and client nodes run on has 32 GB of memory and plenty of CPU, and I/O doesn't appear to be the bottleneck when looking at iostat. The cluster was originally set up with 5 shards per index (the default), and it currently has no replicas (I'm fine if there is data loss). Are there any recommended ways to diagnose what may need changing to stop the cluster from repeatedly going yellow?
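One thing I've been meaning to try is asking the cluster itself why the shard is unassigned via the `_cluster/allocation/explain` API (available since Elasticsearch 5.0). A minimal sketch; the `localhost:9200` endpoint is an assumption, so point it at any client node:

```python
import urllib.request

# Endpoint is an assumption -- point this at any client node.
ES = "http://localhost:9200"

def explain_unassigned(es: str = ES) -> None:
    """Print Elasticsearch's reason for the first unassigned shard.

    With no request body, _cluster/allocation/explain picks the first
    unassigned shard it finds and reports why it cannot be allocated.
    """
    url = es + "/_cluster/allocation/explain?pretty"
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            print(resp.read().decode("utf-8"))
    except OSError as exc:
        print(f"could not reach {url}: {exc}")

explain_unassigned()
```

On older versions, `GET _cat/shards?h=index,shard,prirep,state,unassigned.reason` should give a per-shard unassignment reason instead.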

Health Info:
```
{
  "cluster_name": "myesdb",
  "status": "yellow",
  "timed_out": false,
  "number_of_nodes": 35,
  "number_of_data_nodes": 18,
  "active_primary_shards": 1336,
  "active_shards": 2671,
  "relocating_shards": 0,
  "initializing_shards": 0,
  "unassigned_shards": 1,
  "delayed_unassigned_shards": 0,
  "number_of_pending_tasks": 0,
  "number_of_in_flight_fetch": 0,
  "task_max_waiting_in_queue_millis": 0,
  "active_shards_percent_as_number": 99.9625748502994
}
```
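Since the cluster flips back and forth, it may help to snapshot these numbers over time rather than eyeballing ElasticHQ. A small sketch that condenses a `_cluster/health` response body into one line; the dict below just copies the figures from this post:

```python
def summarize_health(health: dict) -> str:
    """Condense a _cluster/health response body into one line."""
    # Relocating shards already count as active, so the shard total is
    # active + initializing + unassigned.
    total = (health["active_shards"] + health["initializing_shards"]
             + health["unassigned_shards"])
    pct = 100.0 * health["active_shards"] / total
    return (f"{health['cluster_name']}: {health['status']} "
            f"({health['active_shards']}/{total} shards active, "
            f"{pct:.2f}% assigned, "
            f"{health['unassigned_shards']} unassigned)")

# Figures copied from the health output above.
health = {
    "cluster_name": "myesdb",
    "status": "yellow",
    "active_shards": 2671,
    "relocating_shards": 0,
    "initializing_shards": 0,
    "unassigned_shards": 1,
}
print(summarize_health(health))
# -> myesdb: yellow (2671/2672 shards active, 99.96% assigned, 1 unassigned)
```

Running this against the numbers above reproduces the reported `active_shards_percent_as_number` of roughly 99.96%.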
