One shard continually fails to allocate

I have one shard that continually fails to allocate.

There is nothing in the logs that would seem to indicate a problem on any
of the servers.

The pattern of one of the copies of shard '2' not being allocated runs
throughout all my logstash indexes.

Running 1.4.3 on all nodes.

Any pointers on what I should check?

Thanks,

-A


Is it a replica?
Could it be that you are running low on disk space?

--
David :wink:
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
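
If disk space really were the issue, it would show up per node. A minimal way to check (a sketch, assuming the API is reachable on localhost:9200 as in the commands later in this thread):

curl -s 'localhost:9200/_cat/allocation?v'

This lists the shard count and disk figures for each node, so a node bumping into the disk-based allocation thresholds (cluster.routing.allocation.disk.watermark.low / .high) would stand out.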


All the servers have nearly 1 TB free space.

-A


What does http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/cat-shards.html#cat-shards give you?

--
David :wink:
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs


I did some playing last night, but was unable to figure it out.
Looking at the shards this morning gives me this:

root@tetrad:~# curl -s '127.0.0.1:9200/_cat/shards?v'
index shard prirep state docs store ip node
intranet 2 p STARTED 68 56.9kb 127.0.0.1 escorp
intranet 2 r STARTED 68 56.9kb 127.0.0.1 backupnas1
intranet 2 r STARTED 68 56.9kb 127.0.1.1 tetrad
intranet 0 r STARTED 62 43.3kb 127.0.0.1 escorp
intranet 0 p STARTED 62 43.3kb 127.0.0.1 backupnas1
intranet 0 r STARTED 62 43.3kb 127.0.1.1 tetrad
intranet 3 p STARTED 66 47.8kb 127.0.0.1 escorp
intranet 3 r STARTED 66 47.8kb 127.0.0.1 backupnas1
intranet 3 r STARTED 66 47.8kb 127.0.1.1 tetrad
intranet 1 p STARTED 69 58.1kb 127.0.0.1 escorp
intranet 1 r STARTED 69 58.1kb 127.0.0.1 backupnas1
intranet 1 r STARTED 69 55.2kb 127.0.1.1 tetrad
intranet 4 p STARTED 64 43.9kb 127.0.0.1 escorp
intranet 4 r STARTED 64 46.8kb 127.0.0.1 backupnas1
intranet 4 r STARTED 64 43.9kb 127.0.1.1 tetrad
logstash-2015.02.09 4 p STARTED 0 115b 127.0.1.1 tetrad
logstash-2015.02.09 4 r UNASSIGNED
logstash-2015.02.09 0 r STARTED 0 115b 127.0.0.1 escorp
logstash-2015.02.09 0 p STARTED 0 115b 127.0.0.1 backupnas1
logstash-2015.02.09 3 p STARTED 0 115b 127.0.0.1 escorp
logstash-2015.02.09 3 r STARTED 0 115b 127.0.1.1 tetrad
logstash-2015.02.09 1 p STARTED 0 115b 127.0.0.1 escorp
logstash-2015.02.09 1 r STARTED 0 115b 127.0.0.1 backupnas1
logstash-2015.02.09 2 r STARTED 1 7.4kb 127.0.0.1 escorp
logstash-2015.02.09 2 p STARTED 1 7.4kb 127.0.0.1 backupnas1
logstash-2015.02.16 4 p STARTED 2538505 774mb 127.0.1.1 tetrad
logstash-2015.02.16 0 p STARTED 3168221 1.1gb 127.0.0.1 backupnas1
logstash-2015.02.16 3 p STARTED 3171176 1.1gb 127.0.0.1 backupnas1
logstash-2015.02.16 1 p STARTED 2543041 773.9mb 127.0.1.1 tetrad
logstash-2015.02.16 2 p STARTED 3169607 1.1gb 127.0.0.1 escorp
logstash-2015.02.07 2 p STARTED 0 115b 127.0.0.1 backupnas1
logstash-2015.02.07 2 r STARTED 0 115b 127.0.1.1 tetrad
logstash-2015.02.07 0 r STARTED 0 115b 127.0.0.1 escorp
logstash-2015.02.07 0 p STARTED 0 115b 127.0.0.1 backupnas1
logstash-2015.02.07 3 p STARTED 1 7.4kb 127.0.0.1 escorp
logstash-2015.02.07 3 r STARTED 1 7.4kb 127.0.1.1 tetrad
logstash-2015.02.07 1 p STARTED 0 115b 127.0.0.1 escorp
logstash-2015.02.07 1 r STARTED 0 115b 127.0.0.1 backupnas1
logstash-2015.02.07 4 r STARTED 0 115b 127.0.0.1 escorp
logstash-2015.02.07 4 p STARTED 0 115b 127.0.0.1 backupnas1
logstash 2 r STARTED 4 45.8kb 127.0.0.1 escorp
logstash 2 p STARTED 4 45.8kb 127.0.0.1 backupnas1
logstash 2 r STARTED 4 45.8kb 127.0.1.1 tetrad
logstash 0 p STARTED 1 10.9kb 127.0.0.1 escorp
logstash 0 r STARTED 1 10.9kb 127.0.0.1 backupnas1
logstash 0 r STARTED 1 10.9kb 127.0.1.1 tetrad
logstash 3 p STARTED 0 115b 127.0.0.1 escorp
logstash 3 r STARTED 0 115b 127.0.0.1 backupnas1
logstash 3 r STARTED 0 115b 127.0.1.1 tetrad
logstash 1 r STARTED 10 66.5kb 127.0.0.1 escorp
logstash 1 p STARTED 10 66.5kb 127.0.0.1 backupnas1
logstash 1 r STARTED 10 66.5kb 127.0.1.1 tetrad
logstash 4 p STARTED 2 25.3kb 127.0.0.1 escorp
logstash 4 r STARTED 2 25.3kb 127.0.0.1 backupnas1
logstash 4 r STARTED 2 25.3kb 127.0.1.1 tetrad
logstash-2015.02.15 4 p STARTED 4207611 907.5mb 127.0.1.1 tetrad
logstash-2015.02.15 0 p STARTED 4208955 908.6mb 127.0.1.1 tetrad
logstash-2015.02.15 3 p STARTED 4209006 909.1mb 127.0.1.1 tetrad
logstash-2015.02.15 1 p STARTED 4213071 909.5mb 127.0.1.1 tetrad
logstash-2015.02.15 2 p STARTED 4210380 909.1mb 127.0.1.1 tetrad
logstash-2015.02.18 4 p STARTED 3875654 1.5gb 127.0.0.1 escorp
logstash-2015.02.18 4 r STARTED 3875515 1.5gb 127.0.0.1 backupnas1
logstash-2015.02.18 0 p STARTED 3877255 1.5gb 127.0.0.1 escorp
logstash-2015.02.18 0 r STARTED 3877436 1.5gb 127.0.0.1 backupnas1
logstash-2015.02.18 3 r STARTED 3877663 1.5gb 127.0.0.1 escorp
logstash-2015.02.18 3 p STARTED 3877747 1.5gb 127.0.0.1 backupnas1
logstash-2015.02.18 1 p STARTED 3878167 1.5gb 127.0.0.1 backupnas1
logstash-2015.02.18 1 r STARTED 3877947 1.5gb 127.0.1.1 tetrad
logstash-2015.02.18 2 p STARTED 3876279 1.5gb 127.0.0.1 escorp
logstash-2015.02.18 2 r STARTED 3876279 1.5gb 127.0.1.1 tetrad
logstash-2015.02.17 2 p STARTED 6389194 2.4gb 127.0.1.1 tetrad
logstash-2015.02.17 2 r UNASSIGNED
logstash-2015.02.17 0 p STARTED 6383763 2.4gb 127.0.0.1 backupnas1
logstash-2015.02.17 0 r STARTED 6383763 2.4gb 127.0.1.1 tetrad
logstash-2015.02.17 3 r STARTED 6384804 2.4gb 127.0.0.1 escorp
logstash-2015.02.17 3 p STARTED 6384804 2.4gb 127.0.0.1 backupnas1
logstash-2015.02.17 1 p STARTED 6389242 2.4gb 127.0.0.1 escorp
logstash-2015.02.17 1 r STARTED 6389242 2.4gb 127.0.0.1 backupnas1
logstash-2015.02.17 4 p STARTED 6390483 2.4gb 127.0.0.1 escorp
logstash-2015.02.17 4 r STARTED 6390483 2.4gb 127.0.1.1 tetrad
logstash-2015.02.13 4 p STARTED 5047 6.4mb 127.0.1.1 tetrad
logstash-2015.02.13 0 p STARTED 4876 6.7mb 127.0.1.1 tetrad
logstash-2015.02.13 3 p STARTED 4882 6.1mb 127.0.1.1 tetrad
logstash-2015.02.13 1 p STARTED 4864 6.2mb 127.0.1.1 tetrad
logstash-2015.02.13 2 p STARTED 5069 6.4mb 127.0.1.1 tetrad
logstash-2015.02.14 4 p STARTED 856084 629mb 127.0.0.1 escorp
logstash-2015.02.14 0 p STARTED 854612 627.7mb 127.0.0.1 backupnas1
logstash-2015.02.14 3 p STARTED 27866 31.2mb 127.0.1.1 tetrad
logstash-2015.02.14 1 p STARTED 854210 627.4mb 127.0.0.1 backupnas1
logstash-2015.02.14 2 p STARTED 854415 629mb 127.0.0.1 escorp
logstash-2015.02.12 2 p STARTED 5907155 2.1gb 127.0.0.1 escorp
logstash-2015.02.12 0 p STARTED 5912536 2.1gb 127.0.0.1 backupnas1
logstash-2015.02.12 3 p STARTED 5913759 2.1gb 127.0.0.1 escorp
logstash-2015.02.12 1 p STARTED 5910865 2.1gb 127.0.0.1 backupnas1
logstash-2015.02.12 4 p STARTED 5907732 2.1gb 127.0.0.1 escorp

-A


And this?

http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/cat-pending-tasks.html

David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr | @scrutmydocs


Nada. :wink:

root@tetrad:~# curl 'localhost:9200/_cat/pending_tasks?v'
insertOrder timeInQueue priority source
root@tetrad:~#

-A


Hey Aaron,
What do you get back if you try to use this set of commands to manually
allocate the shard to a node?

http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/cluster-reroute.html

I had this problem before, but it turned out we had one node that had
accidentally been upgraded while the rest were still on a previous version.
I was able to determine this by reading the error output from the shard
allocation command.

Todd
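
For reference, a manual allocation attempt for one of the unassigned replicas from the _cat/shards output above might look like this (a sketch only; the index, shard, and node names are taken from that output, and on 1.x the reroute "allocate" command takes index, shard, node, and allow_primary):

curl -XPOST 'localhost:9200/_cluster/reroute?pretty' -d '{
  "commands": [
    {
      "allocate": {
        "index": "logstash-2015.02.17",
        "shard": 2,
        "node": "escorp",
        "allow_primary": false
      }
    }
  ]
}'

If the allocation is rejected, the response normally includes the reason it was refused, which is the error output Todd is referring to.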


grumble
That's it.
Two of the nodes are FreeBSD, the other two are Linux.
It appears the two Linux nodes 'magically' updated themselves to 1.4.3...

Thanks for the help.

-A
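
For anyone landing here with the same symptom: a quick way to confirm that every node is running the same version (a sketch, assuming the "version" column is available through the cat nodes API as on 1.x):

curl -s 'localhost:9200/_cat/nodes?v&h=name,version'

Elasticsearch will not allocate a replica onto a node running an older version than the node holding the primary, so a partially upgraded cluster shows up as exactly this kind of stubbornly UNASSIGNED replica.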
