Unassigned shards, v2

Hi there,
a few weeks ago I had a problem with some unassigned shards, where I had the
same number of unassigned and assigned shards, and I solved that thanks to
advice (here: https://groups.google.com/forum/#!searchin/elasticsearch/unassigned$20shards/elasticsearch/Y2QQ-G0hICM/weIznt5PkKQJ)
by adding a new node.
But now another problem has appeared: I had unassigned shards even though 2
nodes were running. So I decided to turn off the replicas. That made half of
the shards disappear (of course), but some unassigned ones still remained.

So I tried to add a few new nodes and ended up with 10. Thanks to that, some
unassigned shards disappeared, but most of them did not. And if I shut the
nodes down and then start only one node again (which should be enough without
replicas), this is the health status:

{
  "cluster_name" : "elasticsearch",
  "status" : "red",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 22,
  "active_shards" : 22,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 198
}
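For scripting around a red cluster, the interesting counters can be pulled out
of a saved health response with standard tools. A minimal sketch, assuming the
response above was saved to a file (health.json is just an example name, and
the copy recreated below is abbreviated so the snippet is self-contained):

```shell
# Normally the file would come from the health API, e.g.:
#   curl -s 'localhost:9200/_cluster/health?pretty' > health.json
# Recreate an abbreviated copy here so the snippet runs on its own.
cat > health.json <<'EOF'
{
  "cluster_name" : "elasticsearch",
  "status" : "red",
  "unassigned_shards" : 198
}
EOF

# Extract the unassigned-shard count.
grep -o '"unassigned_shards" : [0-9]*' health.json | grep -o '[0-9]*$'
```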

So about 1/10 of the shards were assigned and 9/10 were not. It seems the
shards are still "connected" to the old nodes and I have to reroute them to
the only remaining node.

But I couldn't do it.
I used this page: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/cluster-reroute.html
and the following command:

curl -XPOST 'localhost:9200/_cluster/reroute' -d '{
  "commands" : [ {
    "allocate" : {
      "index" : "logstash-2013.12.10",
      "shard" : 0,
      "node" : "Q6hyVtoPTrSxm_xIGTg3CQ",
      "allow_primary" : 1
    }
  } ]
}'

(First I tried it without allow_primary, but that throws the error "trying to
allocate a primary shard [logstash-2013.12.10][0]], which is disabled", so I
added the allow_primary flag.) But then it throws this exception:

org.elasticsearch.index.gateway.IndexShardGatewayRecoveryException:
[logstash-2013.12.10][0] shard allocated for local recovery (post api),
should exists, but doesn't
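For readers trying the same thing: the reroute body is easier to get right if
it is assembled from variables first. A minimal sketch using the same values
as the command above (this only builds and prints the body; whether the
allocation then succeeds still depends on the shard data existing on disk,
which is what the exception above complains about):

```shell
# Values taken from the reroute command above; the node field is the
# node ID (as reported by the /_nodes API), not the node name.
INDEX=logstash-2013.12.10
SHARD=0
NODE=Q6hyVtoPTrSxm_xIGTg3CQ

# Build the request body; "allow_primary":true is the boolean form of the flag.
BODY=$(printf '{"commands":[{"allocate":{"index":"%s","shard":%s,"node":"%s","allow_primary":true}}]}' \
  "$INDEX" "$SHARD" "$NODE")
echo "$BODY"

# Then POST it (not run here):
#   curl -XPOST 'localhost:9200/_cluster/reroute' -d "$BODY"
```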

So I really don't know what I can do, or whether I am taking the right steps.
Can somebody give me some advice, please?

Thank you

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/d0585313-ee35-435b-b530-ff8389a2577c%40googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.

Hey,

did you do any allocation-specific configuration? Did you disable allocation
entirely? Is there anything in your cluster settings or in your configuration?
Did you do any configuration before firing up your cluster, or do you remember
setting any special option?

Can you reproduce this when you set up a new system, so we could reproduce
this behaviour locally as well?

--Alex

On Thu, Dec 19, 2013 at 3:17 PM, Honza dufeja@gmail.com wrote:


Hello,
thank you for the answer.
I didn't do any specific configuration or anything nonstandard. But I had the
problem I mentioned: 1 replica was set by default while I had only one node,
which I resolved with a second node. Then I had some trouble with the "too
many open files" limit, which I solved by raising the ulimit.

Then it worked for a few weeks without problems, but then the shards started
to become unassigned. I noticed it yesterday; because logstash creates a new
index every day, I know that shards older than 5 days are OK but newer ones
are not (I should mention that
index.routing.allocation.total_shards_per_node is always -1).

So I tried the approach of removing the replicas and adding nodes, but it made
things worse.

But the point is that the shards seem to be all right; they are just not
assigned. So I only need to assign them to the one node, and I think it will
be OK. Do you think that is possible, please?

Thank you

On Thursday, December 19, 2013 at 4:39:41 PM UTC+1, Alexander Reelsen wrote:


Hey,

there must be some log files which contain the reasons why a shard suddenly
could not be assigned anymore. Can you check whether you can dig up any
information in the logs about why your setup worked for so long and then
suddenly didn't?

Also, when you create an index manually (curl -X PUT node:9200/mytest), does
that index get created and assigned correctly?

--Alex

On Thu, Dec 19, 2013 at 5:39 PM, Honza dufeja@gmail.com wrote:


Hello,

I'm joining this topic because I'm having the same kind of issue on my system.
I'm trying to build a log indexing engine based on Elasticsearch, and I have:
an ES master node
an ES slave node
logstash

logstash outputs to the ES slave node:

output {
  elasticsearch {
    bind_host => "10.30.19.87"
    cluster => "ubiqube"
  }
}

The issue is that every time a new logstash index is created, it is
unassigned, but apart from that it looks fine.

I did the test with curl -X PUT localhost:9200/mytest

and this new index is also created as unassigned.
With the "head" plugin I can see each shard's status:

{
  state: UNASSIGNED,
  primary: false,
  node: null,
  relocating_node: null,
  shard: 3,
  index: mytest
}
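On 1.x the same per-shard state is also visible through the _cat API, one line
per shard, which is easy to filter. A minimal sketch over a saved copy of that
output (the sample lines below are made up for illustration; a real file would
come from the running cluster):

```shell
# Normally the file would come from the cat API (available from ES 1.0):
#   curl -s 'localhost:9200/_cat/shards' > shards.txt
# Recreate hypothetical sample lines so the snippet runs on its own.
cat > shards.txt <<'EOF'
mytest 3 p UNASSIGNED
mytest 2 p STARTED 0 99b 10.30.19.87 Some-Node
EOF

# Show only the shards that are still unassigned.
grep UNASSIGNED shards.txt
```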

Any idea?

logstash and ES version 1.1.1

Antoine Brun