Oops! SearchPhaseExecutionException[Failed to execute phase [query], all shards failed]

History: I have been testing the ELK stack off and on for a few months. For
months it was logging correctly with no major issues. I picked this back up
this week and everything was logging perfectly, with one exception: I am
logging to the main partition and not the /data partition we set up. After
logging a very large log file we began to get alerts that our disk was
getting full. Today I was changing elasticsearch.yml so logs would be stored
on the /data partition, and now no dashboards are visible in Kibana. Instead
I get the following error: "SearchPhaseExecutionException[Failed to execute
phase [query], all shards failed]"

I have reverted to the previous setup, logging to the main partition, which
is not full:
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 24G 16G 6.4G 72% /

Looking at other posts, this seems to be the starting point, but I am at a
loss on where to turn now:

[elasticsearch]# curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'
{
  "cluster_name" : "elasticsearch",
  "status" : "red",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 575,
  "active_shards" : 575,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 585
}

I cannot seem to grasp the concept of the shards, why they are now
unassigned, or why our Elasticsearch no longer works correctly.
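As a first diagnostic step (not something from the thread, just a common
approach), the _cat APIs available in Elasticsearch 1.x can show exactly
which shards are unassigned and which indices are red versus yellow:

```shell
# List every shard with its index, state, and node;
# unassigned replicas show a state of "UNASSIGNED"
curl -XGET 'http://localhost:9200/_cat/shards?v'

# Per-index summary: red indices have unassigned primaries,
# yellow indices only have unassigned replicas
curl -XGET 'http://localhost:9200/_cat/indices?v'
```

These require a running cluster on localhost:9200, as in the rest of the thread.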

Any help is greatly appreciated.

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/31caefbe-1cc2-4d9c-a309-f03036e1dda6%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Today the Kibana interface appears to be working fine, but the cluster
status is still red.

[root@syslog1 ~]# curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'
{
  "cluster_name" : "elasticsearch",
  "status" : "red",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 590,
  "active_shards" : 590,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 600
}


I didn't get any help on this, but as an FYI for those that may have this
issue and are just starting:

Digging deeper, it appears our indices were created with 5 shards and 1
replica. We are only using 1 node, so every day Elasticsearch would create
an index of 10 shards: 5 primaries plus 5 replica copies that can never be
assigned, because a replica cannot live on the same node as its primary (a
second node would hold them for redundancy). We changed the settings so all
future indices are created with 0 replicas. I can't find a way to clean up
all the unallocated shards from previous indices without deleting the data.

If active_shards is almost equal to unassigned_shards, you are using
replication and need a second node running.
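One way to make all future daily indices default to 0 replicas is an index
template, which Elasticsearch 1.x supports. This is a sketch: the template
name and the logstash-* index pattern are assumptions about how the daily
indices are named.

```shell
# Hypothetical template name; the pattern should match whatever
# daily index names your setup creates (e.g. logstash-YYYY.MM.DD).
curl -XPUT 'http://localhost:9200/_template/logstash_no_replicas' -d '
{
  "template" : "logstash-*",
  "settings" : {
    "number_of_replicas" : 0
  }
}'
```

This only affects indices created after the template is installed; existing
indices keep their current replica count.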


You should be able to set the number of replicas for all previous indexes
to 0. You cannot reduce the primary shard count once an index is created,
or increase it for that matter; the only way to change it is to reindex
into a new index.

http://www.elastic.co/guide/en/elasticsearch/reference/current/indices-update-settings.html

curl -XPUT 'localhost:9200/my_index/_settings' -d '
{
  "index" : {
    "number_of_replicas" : 0
  }
}'

On Thursday, March 12, 2015 at 11:35:12 AM UTC-6, Taylor Wood wrote:


The following, as suggested, was able to fix all my previous indexes,
setting replication to 0 and essentially removing the 5 unassigned shards
we had per index.

curl -XPUT 'localhost:9200/*/_settings' -d '
{
  "index" : {
    "number_of_replicas" : 0
  }
}'

Thank you.
