503: SearchPhaseExecutionException[Failed to execute phase [query], all shards failed]

Hello,

I have a newbie question. What is the basic meaning of this error and where
do I go to debug it?

{
  "error": "SearchPhaseExecutionException[Failed to execute phase [query], all shards failed]",
  "status": 503
}
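From what I've read, this error means every shard of the target index failed during the query phase, i.e. no shard could answer at all (index missing, shards unassigned, or data corrupted?). Is checking cluster and index health the right first step? I was going to run something like this, assuming the embedded node listens on the default port 9200:

```shell
# Assumed endpoint: the embedded Elasticsearch node on localhost:9200.
curl -s 'http://localhost:9200/_cluster/health?pretty'
curl -s 'http://localhost:9200/_cluster/health?level=indices&pretty'   # per-index health
curl -s 'http://localhost:9200/logstash-2013.12.02/_status?pretty'     # the index that 503s
```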

I'm starting elasticsearch with logstash on a single machine:

/usr/bin/java -Des.path.data=/../data/ -jar /../logstash-1.2.1-flatjar.jar agent -f /../server.conf -- web

Does anything look wrong with that command line?

server.conf is:

input {
  udp {
    port => 8070
    codec => "json"
  }
}
output {
  elasticsearch {
    embedded => true
  }
}

Looks ok?

I have a couple of queries that are working fine:

POST logstash-2013.12.03/_search
{
  "query": {
    "filtered": {
      "query": {"bool": {"should": [{"query_string": {"query": "*"}}]}},
      "filter": {"bool": {"must": [
        {"match_all": {}},
        {"range": {"@timestamp": {"from": 1383496126260, "to": 1386088126260}}},
        {"bool": {"must": [{"match_all": {}}]}}
      ]}}
    }
  },
  "highlight": {
    "fields": {},
    "fragment_size": 2147483647,
    "pre_tags": ["@start-highlight@"],
    "post_tags": ["@end-highlight@"]
  },
  "size": 500,
  "sort": [{"@timestamp": {"order": "desc"}}]
}

POST logstash-2013.12.03/_search
{
  "facets": {
    "0": {
      "date_histogram": {"field": "@timestamp", "interval": "12h"},
      "facet_filter": {"fquery": {"query": {"filtered": {
        "query": {"query_string": {"query": "*"}},
        "filter": {"bool": {"must": [
          {"match_all": {}},
          {"range": {"@timestamp": {"from": 1383496126260, "to": 1386088126260}}},
          {"bool": {"must": [{"match_all": {}}]}}
        ]}}
      }}}}
    }
  },
  "size": 0
}

But when I step back to the previous day's index with a similar search, it doesn't work:

POST logstash-2013.12.02/_search
{
  "facets": {
    "0": {
      "date_histogram": {"field": "@timestamp", "interval": "12h"},
      "facet_filter": {"fquery": {"query": {"filtered": {
        "query": {"query_string": {"query": "*"}},
        "filter": {"bool": {"must": [
          {"match_all": {}},
          {"range": {"@timestamp": {"from": 1383496126260, "to": 1386088126260}}},
          {"bool": {"must": [{"match_all": {}}]}}
        ]}}
      }}}}
    }
  },
  "size": 0
}

Perhaps the problem is that I tried to "purge" at one point with something
like this:

#!/usr/bin/perl
use strict;
use warnings;

# Keep at most $days daily indices; delete the oldest beyond that count.
my $days   = shift || 90;
my $STATUS = 'curl -s http://localhost:9200/_status';
my $DELETE = 'curl -s -XDELETE http://localhost:9200/';

# Run the status call via backticks and crudely rewrite the JSON into
# Perl syntax for eval (":" -> "=>", true/false -> 1/0).
my $data = `$STATUS`;
$data =~ s/:/=>/g;
$data =~ s/true/1/g;
$data =~ s/false/0/g;
my $ds = eval($data);

my @indices = sort(keys(%{$ds->{'indices'}}));

while (@indices > $days) {
    my $index = shift(@indices);
    print $index, ": ";
    system("$DELETE$index");
    print "\n";
}

I don't know Perl, and I don't know where this snippet came from :confused: But since the index in question is not more than 90 days old, and since it still appears to exist (albeit hobbled), I don't think this is the problem.
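For what it's worth, I realize the count-based loop above could delete indices younger than 90 days whenever more than 90 indices exist. A date-based variant I was considering instead — just a sketch; the hard-coded index names and the localhost:9200 endpoint are placeholders for the real _status lookup:

```shell
#!/bin/bash
# Sketch: age out logstash-YYYY.MM.DD indices by the date embedded in the
# name rather than by counting how many exist. The two index names below
# are hard-coded for illustration; in practice they would come from _status.
days=${1:-90}
cutoff=$(date -d "$days days ago" +%Y.%m.%d)           # e.g. 2013.09.04
for index in logstash-2013.12.02 logstash-2013.12.03; do
  day=${index#logstash-}                               # strip the prefix
  if [[ "$day" < "$cutoff" ]]; then                    # lexicographic compare works for YYYY.MM.DD
    echo "would delete $index"
    # curl -s -XDELETE "http://localhost:9200/$index"  # the real deletion
  fi
done
```

Run with the retention window as the first argument, e.g. `./purge.sh 90`.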

Zeroth, is there anything obviously wrong with what I'm doing?
First, what is the meaning of the error and how should I remedy the current
situation?
Second, does the above Perl snippet seem like a proper way to purge old logs, or is there a better way at the ready?

Thank You,
Skylar

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/00a2ab21-9514-4037-b6dc-9cf8ff92884b%40googlegroups.com.