Too many open files and other problems

Hi,

First I had problems with 'too many open files' warnings. When I finally
managed to increase the file limit

[10:56:44,855][INFO ][bootstrap ] max_open_files [249978]

other warnings occurred, like

[WARN ][cluster.action.shard ] [Gaea] received shard failed for
[logstash-2012.11.06][0], node[328qzX6jQaKGllc1TsOM2Q], [P],
s[INITIALIZING], reason [Failed to create shard, message
[IndexShardCreationException[[logstash-2012.11.06][0] failed to create
shard]; nested: IOException[directory
'/opt/data/elasticsearch/nodes/1/indices/logstash-2012.11.06/0/index'
exists and is a directory, but cannot be listed: list() returned null]; ]]

and

[WARN ][cluster.action.shard ] [Gaea] received shard failed for
[logstash-2012.11.06][0], node[328qzX6jQaKGllc1TsOM2Q], [P],
s[INITIALIZING], reason [Failed to start shard, message
[IndexShardGatewayRecoveryException[[logstash-2012.11.06][0] shard
allocated for local recovery (post api), should exists, but doesn't]]]

and still a 'too many open files' warning:

[WARN ][cluster.action.shard ] [Gaea] received shard failed for
[logstash-2012.11.07][4], node[328qzX6jQaKGllc1TsOM2Q], [P],
s[INITIALIZING], reason [Failed to start shard, message
[IndexShardGatewayRecoveryException[[logstash-2012.11.07][4] failed
recovery]; nested: EngineCreationFailureException[[logstash-2012.11.07][4]
Failed to open reader on writer]; nested:
FileNotFoundException[/opt/data/elasticsearch/nodes/1/indices/logstash-2012.11.07/4/index/_h.fdx
(Too many open files)]; ]]

I'm using the default configuration. What is going wrong?

Thanks, Ulli

--

Hello Ulli,

I have a few questions about your setup:

  • How many indices do you have in Elasticsearch?
  • Do you have only one node or more?
  • Do you use the default "elasticsearch" cluster name? If so, can you
    make sure that there are no other Elasticsearch servers in the same
    [multicast-enabled] network that unintentionally joined your cluster?
    I got into that situation once while testing and it was... let's just
    say funny :)
  • How did you increase the max_open_files limit? I'm wondering whether
    it got reset in the meantime, since 249978 is a pretty high limit to
    still be getting such warnings.
  • Does the user that runs ES have permissions on directories like
    /opt/data/elasticsearch/nodes/1/indices/logstash-2012.11.06/0/index ?
    That also implies execute (and possibly read) permissions on all the
    parent directories. Or is that directory just gone or empty? A couple
    of quick checks are sketched below.
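
For example, assuming a Linux box where the process can be found with
'pgrep -f elasticsearch' (the PID lookup, the "elasticsearch" user, and
the paths below are placeholders; adjust them to your setup), these would
answer the last two questions:

  # effective open-files limit of the running ES process
  cat /proc/$(pgrep -f elasticsearch | head -n1)/limits | grep 'open files'

  # number of file descriptors the process holds right now
  ls /proc/$(pgrep -f elasticsearch | head -n1)/fd | wc -l

  # can the ES user traverse and list a shard directory?
  # (replace 'elasticsearch' with whatever user actually runs ES)
  sudo -u elasticsearch ls -l /opt/data/elasticsearch/nodes/1/indices/logstash-2012.11.06/0/index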

Best regards,
Radu

http://sematext.com/ -- Elasticsearch -- Solr -- Lucene

--

Hello Radu,

There is only one node with one index.
I use the default cluster name, but there is no other server.
I set the file limit in the bin/elasticsearch script with 'ulimit -n 250000'.
I run ES as the root user, so I have all permissions.
The index directory is not empty, but it contains a 'write.lock' file
(after stopping ES). Might this be a problem?
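
In other words, something like this in bin/elasticsearch, ahead of the
java invocation so the new limit applies to the JVM:

  # raise the per-process open-files limit before launching the JVM
  ulimit -n 250000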

--

I've made a mistake. There is one node with four indices.

--

Hello Ulli,

Maybe it's just me, but so far I still don't get what's going on.
Could you post a full log of what happens when you start up your node,
up to the point where you start getting errors? Including the errors,
of course.
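
One way to capture the whole startup is to run the node in the foreground
and tee the output to a file (assuming 0.19 still takes the -f flag to
stay in the foreground):

  # run in the foreground and capture all output from startup
  bin/elasticsearch -f 2>&1 | tee /tmp/es-startup.log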

Best regards,
Radu

http://sematext.com/ -- Elasticsearch -- Solr -- Lucene

--

I can't attach it (the upload fails with "When communicating with the
server 340-error has occurred."), so I'm posting it directly:

[10:56:44,855][INFO ][bootstrap ] max_open_files [249978]
[10:56:44,876][INFO ][node ] [Gaea] {0.19.10}[32244]:
initializing ...
[10:56:44,880][INFO ][plugins ] [Gaea] loaded [], sites []
[10:56:45,735][DEBUG][discovery.zen.ping.multicast] [Gaea] using group
[224.2.2.4], with port [54328], ttl [3], and address [null]
[10:56:45,738][DEBUG][discovery.zen.ping.unicast] [Gaea] using initial
hosts [], with concurrent_connects [10]
[10:56:45,738][DEBUG][discovery.zen ] [Gaea] using ping.timeout
[3s], master_election.filter_client [true], master_election.filter_data
[false]
[10:56:45,742][DEBUG][discovery.zen.elect ] [Gaea] using
minimum_master_nodes [-1]
[10:56:45,743][DEBUG][discovery.zen.fd ] [Gaea] [master] uses
ping_interval [1s], ping_timeout [30s], ping_retries [3]
[10:56:45,745][DEBUG][discovery.zen.fd ] [Gaea] [node ] uses
ping_interval [1s], ping_timeout [30s], ping_retries [3]
[10:56:46,528][DEBUG][gateway.local ] [Gaea] using
initial_shards [quorum], list_timeout [30s]
[10:56:46,655][DEBUG][gateway.local.state.meta ] [Gaea] using
gateway.local.auto_import_dangled [YES], with
gateway.local.dangling_timeout [2h]
[10:56:46,655][DEBUG][gateway.local.state.meta ] [Gaea] took 0s to load
state
[10:56:46,655][DEBUG][gateway.local.state.shards] [Gaea] took 0s to load
started shards state
[10:56:46,658][INFO ][node ] [Gaea] {0.19.10}[32244]:
initialized
[10:56:46,658][INFO ][node ] [Gaea] {0.19.10}[32244]:
starting ...
[10:56:46,731][INFO ][transport ] [Gaea] bound_address
{inet[/0.0.0.0:9300]}, publish_address {inet[/172.22.52.34:9300]}
[10:56:49,756][DEBUG][discovery.zen ] [Gaea] filtered ping
responses: (filter_client[true], filter_data[false]) {none}
[10:56:49,758][INFO ][cluster.service ] [Gaea] new_master
[Gaea][Ovip2gcHSgC85ZnoDZfJFQ][inet[/172.22.52.34:9300]], reason:
zen-disco-join (elected_as_master)
[10:56:49,770][INFO ][discovery ] [Gaea]
elasticsearch/Ovip2gcHSgC85ZnoDZfJFQ
[10:56:49,781][INFO ][http ] [Gaea] bound_address
{inet[/0.0.0.0:9200]}, publish_address {inet[/172.22.52.34:9200]}
[10:56:49,782][INFO ][node ] [Gaea] {0.19.10}[32244]:
started
[10:56:49,840][INFO ][gateway ] [Gaea] recovered [0]
indices into cluster_state
[10:56:50,490][INFO ][cluster.service ] [Gaea] added {[Stryker,
William][QUbkMX1cTyy93GqnZEPRbg][inet[/172.22.52.34:9304]]{client=true,
data=false},}, reason: zen-disco-receive(join from node[[Stryker,
William][QUbkMX1cTyy93GqnZEPRbg][inet[/172.22.52.34:9304]]{client=true,
data=false}])
[10:58:36,160][INFO ][cluster.service ] [Gaea] added {[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]],}, reason:
zen-disco-receive(join from node[[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]])
[10:58:36,202][INFO ][gateway.local.state.meta ] [Gaea] auto importing
dangled indices
[logstash-2012.11.06/OPEN][logstash-2012.11.05/OPEN][logstash-2012.11.08/OPEN][logstash-2012.11.07/OPEN]
from [[Kiden Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]
[10:58:36,209][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][3]: allocating [[logstash-2012.11.08][3], node[null],
[P], s[UNASSIGNED]] to [[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]] on primary
allocation
[10:58:36,221][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.06][3]: allocating [[logstash-2012.11.06][3], node[null],
[P], s[UNASSIGNED]] to [[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]] on primary
allocation
[10:58:36,222][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][0]: allocating [[logstash-2012.11.08][0], node[null],
[P], s[UNASSIGNED]] to [[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]] on primary
allocation
[10:58:36,228][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.06][2]: allocating [[logstash-2012.11.06][2], node[null],
[P], s[UNASSIGNED]] to [[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]] on primary
allocation
[10:58:36,230][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.06][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:36,233][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.05][4]: throttling allocation [[logstash-2012.11.05][4],
node[null], [P], s[UNASSIGNED]] to [[[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]] on primary
allocation
[10:58:36,235][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.05][2]: throttling allocation [[logstash-2012.11.05][2],
node[null], [P], s[UNASSIGNED]] to [[[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]] on primary
allocation
[10:58:36,236][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][1]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:36,241][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.05][1]: throttling allocation [[logstash-2012.11.05][1],
node[null], [P], s[UNASSIGNED]] to [[[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]] on primary
allocation
[10:58:36,241][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.06][0]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:36,242][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][2]: throttling allocation [[logstash-2012.11.08][2],
node[null], [P], s[UNASSIGNED]] to [[[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]] on primary
allocation
[10:58:36,243][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:36,244][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:36,246][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.05][3]: throttling allocation [[logstash-2012.11.05][3],
node[null], [P], s[UNASSIGNED]] to [[[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]] on primary
allocation
[10:58:36,247][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][3]: throttling allocation [[logstash-2012.11.07][3],
node[null], [P], s[UNASSIGNED]] to [[[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]] on primary
allocation
[10:58:36,248][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][2]: throttling allocation [[logstash-2012.11.07][2],
node[null], [P], s[UNASSIGNED]] to [[[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]] on primary
allocation
[10:58:36,248][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][1]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:36,255][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.05][0]: throttling allocation [[logstash-2012.11.05][0],
node[null], [P], s[UNASSIGNED]] to [[[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]] on primary
allocation
[10:58:36,256][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][0]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:36,259][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.06][1]: throttling allocation [[logstash-2012.11.06][1],
node[null], [P], s[UNASSIGNED]] to [[[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]] on primary
allocation
[10:58:36,650][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.08][3], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [after recovery from gateway]
[10:58:36,651][DEBUG][cluster.action.shard ] [Gaea] applying started
shards [[logstash-2012.11.08][3], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING]], reason [after recovery from gateway]
[10:58:36,652][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:36,652][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.05][1]: allocating [[logstash-2012.11.05][1], node[null],
[P], s[UNASSIGNED]] to [[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]] on primary
allocation
[10:58:36,652][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.06][1]: throttling allocation [[logstash-2012.11.06][1],
node[null], [P], s[UNASSIGNED]] to [[[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]] on primary
allocation
[10:58:36,652][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.05][2]: throttling allocation [[logstash-2012.11.05][2],
node[null], [P], s[UNASSIGNED]] to [[[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]] on primary
allocation
[10:58:36,652][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][3]: throttling allocation [[logstash-2012.11.07][3],
node[null], [P], s[UNASSIGNED]] to [[[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]] on primary
allocation
[10:58:36,652][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][2]: throttling allocation [[logstash-2012.11.07][2],
node[null], [P], s[UNASSIGNED]] to [[[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]] on primary
allocation
[10:58:36,652][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][1]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:36,652][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.06][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:36,652][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.05][4]: throttling allocation [[logstash-2012.11.05][4],
node[null], [P], s[UNASSIGNED]] to [[[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]] on primary
allocation
[10:58:36,653][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.05][3]: throttling allocation [[logstash-2012.11.05][3],
node[null], [P], s[UNASSIGNED]] to [[[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]] on primary
allocation
[10:58:36,653][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.06][0]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:36,653][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][0]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:36,653][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.05][0]: throttling allocation [[logstash-2012.11.05][0],
node[null], [P], s[UNASSIGNED]] to [[[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]] on primary
allocation
[10:58:36,653][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][2]: throttling allocation [[logstash-2012.11.08][2],
node[null], [P], s[UNASSIGNED]] to [[[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]] on primary
allocation
[10:58:36,653][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][1]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:36,653][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:36,687][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.06][2], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [after recovery from gateway]
[10:58:36,722][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.06][3], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [after recovery from gateway]
[10:58:36,801][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.06][3], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [master
[Gaea][Ovip2gcHSgC85ZnoDZfJFQ][inet[/172.22.52.34:9300]] marked shard as
initializing, but shard already started, mark shard as started]
[10:58:36,805][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.06][2], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [master
[Gaea][Ovip2gcHSgC85ZnoDZfJFQ][inet[/172.22.52.34:9300]] marked shard as
initializing, but shard already started, mark shard as started]
[10:58:36,865][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.05][1], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [after recovery from gateway]
[10:58:36,917][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.08][0], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [after recovery from gateway]
[10:58:36,942][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.08][3] state: [CREATED]
[10:58:36,946][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.08][3] state: [CREATED]->[RECOVERING], reason [from
[Kiden Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]
[10:58:36,947][DEBUG][cluster.action.shard ] [Gaea] applying started
shards [[logstash-2012.11.06][2], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], [logstash-2012.11.06][3], node[rDKyOTqpTeStQirDZ-yBMg],
[P], s[INITIALIZING], [logstash-2012.11.06][3],
node[rDKyOTqpTeStQirDZ-yBMg], [P], s[INITIALIZING],
[logstash-2012.11.06][2], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], [logstash-2012.11.05][1], node[rDKyOTqpTeStQirDZ-yBMg],
[P], s[INITIALIZING], [logstash-2012.11.08][0],
node[rDKyOTqpTeStQirDZ-yBMg], [P], s[INITIALIZING]], reason [after recovery
from gateway]
[10:58:36,948][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][2]: allocating [[logstash-2012.11.07][2], node[null],
[P], s[UNASSIGNED]] to [[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]] on primary
allocation
[10:58:36,948][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][2]: allocating [[logstash-2012.11.08][2], node[null],
[P], s[UNASSIGNED]] to [[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]] on primary
allocation
[10:58:36,948][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.05][2]: allocating [[logstash-2012.11.05][2], node[null],
[P], s[UNASSIGNED]] to [[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]] on primary
allocation
[10:58:36,948][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][1]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:36,948][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.06][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:36,948][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.05][4]: allocating [[logstash-2012.11.05][4], node[null],
[P], s[UNASSIGNED]] to [[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]] on primary
allocation
[10:58:36,948][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.05][3]: throttling allocation [[logstash-2012.11.05][3],
node[null], [P], s[UNASSIGNED]] to [[[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]] on primary
allocation
[10:58:36,949][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.06][1]: throttling allocation [[logstash-2012.11.06][1],
node[null], [P], s[UNASSIGNED]] to [[[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]] on primary
allocation
[10:58:36,949][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.06][0]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:36,949][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][0]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:36,949][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:36,949][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][1]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:36,949][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.05][0]: throttling allocation [[logstash-2012.11.05][0],
node[null], [P], s[UNASSIGNED]] to [[[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]] on primary
allocation
[10:58:36,949][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][3]: throttling allocation [[logstash-2012.11.07][3],
node[null], [P], s[UNASSIGNED]] to [[[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]] on primary
allocation
[10:58:36,949][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:36,993][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.08][0] state: [CREATED]
[10:58:36,994][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.08][0] state: [CREATED]->[RECOVERING], reason [from
[Kiden Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]
[10:58:37,120][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.08][0] state: [RECOVERING]->[STARTED], reason [post
recovery]
[10:58:37,121][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.08][0] scheduling refresher every 1s
[10:58:37,121][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.08][0] scheduling optimizer / merger every 1s
[10:58:37,123][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.08][3] state: [RECOVERING]->[STARTED], reason [post
recovery]
[10:58:37,123][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.08][3] scheduling refresher every 1s
[10:58:37,123][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.08][3] scheduling optimizer / merger every 1s
[10:58:37,126][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.07][2], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [after recovery from gateway]
[10:58:37,127][DEBUG][cluster.action.shard ] [Gaea] applying started
shards [[logstash-2012.11.07][2], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING]], reason [after recovery from gateway]
[10:58:37,127][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,127][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.05][0]: allocating [[logstash-2012.11.05][0], node[null],
[P], s[UNASSIGNED]] to [[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]] on primary
allocation
[10:58:37,127][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,127][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][1]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,127][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.06][0]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,128][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][1]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,128][DEBUG][cluster.action.shard ] [Gaea] sending shard
started for [logstash-2012.11.08][0], node[Ovip2gcHSgC85ZnoDZfJFQ], [R],
s[INITIALIZING], reason [after recovery (replica) from node [[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]]
[10:58:37,128][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][3]: throttling allocation [[logstash-2012.11.07][3],
node[null], [P], s[UNASSIGNED]] to [[[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]] on primary
allocation
[10:58:37,128][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.08][0], node[Ovip2gcHSgC85ZnoDZfJFQ], [R],
s[INITIALIZING], reason [after recovery (replica) from node [[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]]
[10:58:37,128][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.06][1]: throttling allocation [[logstash-2012.11.06][1],
node[null], [P], s[UNASSIGNED]] to [[[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]] on primary
allocation
[10:58:37,128][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.06][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,128][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][0]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,128][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.05][3]: throttling allocation [[logstash-2012.11.05][3],
node[null], [P], s[UNASSIGNED]] to [[[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]] on primary
allocation
[10:58:37,129][DEBUG][cluster.action.shard ] [Gaea] sending shard
started for [logstash-2012.11.08][3], node[Ovip2gcHSgC85ZnoDZfJFQ], [R],
s[INITIALIZING], reason [after recovery (replica) from node [[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]]
[10:58:37,129][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.08][3], node[Ovip2gcHSgC85ZnoDZfJFQ], [R],
s[INITIALIZING], reason [after recovery (replica) from node [[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]]
[10:58:37,132][DEBUG][cluster.action.shard ] [Gaea] sending shard
started for [logstash-2012.11.08][0], node[Ovip2gcHSgC85ZnoDZfJFQ], [R],
s[INITIALIZING], reason [master
[Gaea][Ovip2gcHSgC85ZnoDZfJFQ][inet[/172.22.52.34:9300]] marked shard as
initializing, but shard already started, mark shard as started]
[10:58:37,132][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.08][0], node[Ovip2gcHSgC85ZnoDZfJFQ], [R],
s[INITIALIZING], reason [master
[Gaea][Ovip2gcHSgC85ZnoDZfJFQ][inet[/172.22.52.34:9300]] marked shard as
initializing, but shard already started, mark shard as started]
[10:58:37,132][DEBUG][cluster.action.shard ] [Gaea] sending shard
started for [logstash-2012.11.08][3], node[Ovip2gcHSgC85ZnoDZfJFQ], [R],
s[INITIALIZING], reason [master
[Gaea][Ovip2gcHSgC85ZnoDZfJFQ][inet[/172.22.52.34:9300]] marked shard as
initializing, but shard already started, mark shard as started]
[10:58:37,132][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.08][3], node[Ovip2gcHSgC85ZnoDZfJFQ], [R],
s[INITIALIZING], reason [master
[Gaea][Ovip2gcHSgC85ZnoDZfJFQ][inet[/172.22.52.34:9300]] marked shard as
initializing, but shard already started, mark shard as started]
[10:58:37,133][DEBUG][cluster.action.shard ] [Gaea] applying started
shards [[logstash-2012.11.08][0], node[Ovip2gcHSgC85ZnoDZfJFQ], [R],
s[INITIALIZING], [logstash-2012.11.08][3], node[Ovip2gcHSgC85ZnoDZfJFQ],
[R], s[INITIALIZING], [logstash-2012.11.08][0],
node[Ovip2gcHSgC85ZnoDZfJFQ], [R], s[INITIALIZING],
[logstash-2012.11.08][3], node[Ovip2gcHSgC85ZnoDZfJFQ], [R],
s[INITIALIZING]], reason [after recovery (replica) from node [[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]]
[10:58:37,133][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.06][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,133][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.05][3]: throttling allocation [[logstash-2012.11.05][3],
node[null], [P], s[UNASSIGNED]] to [[[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]] on primary
allocation
[10:58:37,133][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][3]: throttling allocation [[logstash-2012.11.07][3],
node[null], [P], s[UNASSIGNED]] to [[[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]] on primary
allocation
[10:58:37,133][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][1]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,133][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.06][0]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,133][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,133][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][0]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,134][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][1]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,134][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,134][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.06][1]: throttling allocation [[logstash-2012.11.06][1],
node[null], [P], s[UNASSIGNED]] to [[[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]] on primary
allocation
[10:58:37,149][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.05][2], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [after recovery from gateway]
[10:58:37,175][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.05][4], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [after recovery from gateway]
[10:58:37,189][WARN ][cluster.action.shard ] [Gaea] received shard
failed for [logstash-2012.11.08][2], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [Failed to start shard, message
[IndexShardGatewayRecoveryException[[logstash-2012.11.08][2] shard
allocated for local recovery (post api), should exists, but doesn't]]]
[10:58:37,220][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.06][3] state: [CREATED]
[10:58:37,221][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.06][3] state: [CREATED]->[RECOVERING], reason [from
[Kiden Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]
[10:58:37,249][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.06][3] state: [RECOVERING]->[STARTED], reason [post
recovery]
[10:58:37,250][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.06][3] scheduling refresher every 1s
[10:58:37,250][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.06][3] scheduling optimizer / merger every 1s
[10:58:37,250][DEBUG][cluster.action.shard ] [Gaea] sending shard
started for [logstash-2012.11.06][3], node[Ovip2gcHSgC85ZnoDZfJFQ], [R],
s[INITIALIZING], reason [after recovery (replica) from node [[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]]
[10:58:37,250][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.06][3], node[Ovip2gcHSgC85ZnoDZfJFQ], [R],
s[INITIALIZING], reason [after recovery (replica) from node [[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]]
[10:58:37,258][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.05][2], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [master
[Gaea][Ovip2gcHSgC85ZnoDZfJFQ][inet[/172.22.52.34:9300]] marked shard as
initializing, but shard already started, mark shard as started]
[10:58:37,258][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.05][4], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [master
[Gaea][Ovip2gcHSgC85ZnoDZfJFQ][inet[/172.22.52.34:9300]] marked shard as
initializing, but shard already started, mark shard as started]
[10:58:37,260][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.05][1] state: [CREATED]
[10:58:37,261][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.05][1] state: [CREATED]->[RECOVERING], reason [from
[Kiden Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]
[10:58:37,265][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.05][0], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [after recovery from gateway]
[10:58:37,271][WARN ][cluster.action.shard ] [Gaea] received shard
failed for [logstash-2012.11.08][2], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [Failed to start shard, message
[IndexShardGatewayRecoveryException[[logstash-2012.11.08][2] shard
allocated for local recovery (post api), should exists, but doesn't]]]
[10:58:37,279][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.05][1] state: [RECOVERING]->[STARTED], reason [post
recovery]
[10:58:37,279][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.05][1] scheduling refresher every 1s
[10:58:37,279][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.05][1] scheduling optimizer / merger every 1s
[10:58:37,280][DEBUG][cluster.action.shard ] [Gaea] sending shard
started for [logstash-2012.11.05][1], node[Ovip2gcHSgC85ZnoDZfJFQ], [R],
s[INITIALIZING], reason [after recovery (replica) from node [[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]]
[10:58:37,280][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.05][1], node[Ovip2gcHSgC85ZnoDZfJFQ], [R],
s[INITIALIZING], reason [after recovery (replica) from node [[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]]
[10:58:37,280][DEBUG][cluster.action.shard ] [Gaea] applying started
shards [[logstash-2012.11.05][2], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], [logstash-2012.11.05][4], node[rDKyOTqpTeStQirDZ-yBMg],
[P], s[INITIALIZING], [logstash-2012.11.06][3],
node[Ovip2gcHSgC85ZnoDZfJFQ], [R], s[INITIALIZING],
[logstash-2012.11.05][2], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], [logstash-2012.11.05][4], node[rDKyOTqpTeStQirDZ-yBMg],
[P], s[INITIALIZING], [logstash-2012.11.05][0],
node[rDKyOTqpTeStQirDZ-yBMg], [P], s[INITIALIZING],
[logstash-2012.11.05][1], node[Ovip2gcHSgC85ZnoDZfJFQ], [R],
s[INITIALIZING]], reason [after recovery (replica) from node [[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]]
[10:58:37,281][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][0]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,281][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,281][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][1]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,281][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,281][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.06][0]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,281][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.05][3]: allocating [[logstash-2012.11.05][3], node[null],
[P], s[UNASSIGNED]] to [[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]] on primary
allocation
[10:58:37,281][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.06][1]: allocating [[logstash-2012.11.06][1], node[null],
[P], s[UNASSIGNED]] to [[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]] on primary
allocation
[10:58:37,282][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][3]: allocating [[logstash-2012.11.07][3], node[null],
[P], s[UNASSIGNED]] to [[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]] on primary
allocation
[10:58:37,282][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.06][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,282][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][1]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,313][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.05][0], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [master
[Gaea][Ovip2gcHSgC85ZnoDZfJFQ][inet[/172.22.52.34:9300]] marked shard as
initializing, but shard already started, mark shard as started]
[10:58:37,314][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.05][4], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [master
[Gaea][Ovip2gcHSgC85ZnoDZfJFQ][inet[/172.22.52.34:9300]] marked shard as
initializing, but shard already started, mark shard as started]
[10:58:37,314][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.05][2], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [master
[Gaea][Ovip2gcHSgC85ZnoDZfJFQ][inet[/172.22.52.34:9300]] marked shard as
initializing, but shard already started, mark shard as started]
[10:58:37,314][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.05][2] state: [CREATED]
[10:58:37,315][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.05][2] state: [CREATED]->[RECOVERING], reason [from
[Kiden Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]
[10:58:37,320][WARN ][cluster.action.shard ] [Gaea] received shard
failed for [logstash-2012.11.08][2], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [Failed to start shard, message
[IndexShardGatewayRecoveryException[[logstash-2012.11.08][2] shard
allocated for local recovery (post api), should exists, but doesn't]]]
[10:58:37,342][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.05][4] state: [CREATED]
[10:58:37,342][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.05][4] state: [CREATED]->[RECOVERING], reason [from
[Kiden Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]
[10:58:37,345][DEBUG][cluster.action.shard ] [Gaea] applying started
shards [[logstash-2012.11.05][4], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING]], reason [master
[Gaea][Ovip2gcHSgC85ZnoDZfJFQ][inet[/172.22.52.34:9300]] marked shard as
initializing, but shard already started, mark shard as started]
[10:58:37,345][DEBUG][cluster.action.shard ] [Gaea] Received failed
shard [logstash-2012.11.08][2], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [Failed to start shard, message
[IndexShardGatewayRecoveryException[[logstash-2012.11.08][2] shard
allocated for local recovery (post api), should exists, but doesn't]]]
[10:58:37,345][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.06][0]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,345][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][1]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,346][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.06][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,346][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,346][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,346][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][0]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,346][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][1]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,350][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][2]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,353][DEBUG][cluster.action.shard ] [Gaea] Applying failed
shard [logstash-2012.11.08][2], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [Failed to start shard, message
[IndexShardGatewayRecoveryException[[logstash-2012.11.08][2] shard
allocated for local recovery (post api), should exists, but doesn't]]]
[10:58:37,354][DEBUG][cluster.action.shard ] [Gaea] Received failed
shard [logstash-2012.11.08][2], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [Failed to start shard, message
[IndexShardGatewayRecoveryException[[logstash-2012.11.08][2] shard
allocated for local recovery (post api), should exists, but doesn't]]]
[10:58:37,354][DEBUG][cluster.action.shard ] [Gaea] Received failed
shard [logstash-2012.11.08][2], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [Failed to start shard, message
[IndexShardGatewayRecoveryException[[logstash-2012.11.08][2] shard
allocated for local recovery (post api), should exists, but doesn't]]]
[10:58:37,358][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.05][2] state: [RECOVERING]->[STARTED], reason [post
recovery]
[10:58:37,358][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.05][2] scheduling refresher every 1s
[10:58:37,358][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.05][2] scheduling optimizer / merger every 1s
[10:58:37,359][DEBUG][cluster.action.shard ] [Gaea] sending shard
started for [logstash-2012.11.05][2], node[Ovip2gcHSgC85ZnoDZfJFQ], [R],
s[INITIALIZING], reason [after recovery (replica) from node [[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]]
[10:58:37,359][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.05][2], node[Ovip2gcHSgC85ZnoDZfJFQ], [R],
s[INITIALIZING], reason [after recovery (replica) from node [[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]]
[10:58:37,359][DEBUG][cluster.action.shard ] [Gaea] applying started
shards [[logstash-2012.11.05][2], node[Ovip2gcHSgC85ZnoDZfJFQ], [R],
s[INITIALIZING]], reason [after recovery (replica) from node [[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]]
[10:58:37,359][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,359][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][2]: allocating [[logstash-2012.11.08][2], node[null],
[P], s[UNASSIGNED]] to [[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]] on primary
allocation
[10:58:37,360][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][1]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,360][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][0]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,360][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.06][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,360][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][1]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,360][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.06][0]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,360][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,390][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.05][0] state: [CREATED]
[10:58:37,393][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.05][0] state: [CREATED]->[RECOVERING], reason [from
[Kiden Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]
[10:58:37,394][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.05][4] state: [RECOVERING]->[STARTED], reason [post
recovery]
[10:58:37,395][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.05][4] scheduling refresher every 1s
[10:58:37,395][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.05][4] scheduling optimizer / merger every 1s
[10:58:37,398][DEBUG][cluster.action.shard ] [Gaea] sending shard
started for [logstash-2012.11.05][4], node[Ovip2gcHSgC85ZnoDZfJFQ], [R],
s[INITIALIZING], reason [after recovery (replica) from node [[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]]
[10:58:37,398][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.05][4], node[Ovip2gcHSgC85ZnoDZfJFQ], [R],
s[INITIALIZING], reason [after recovery (replica) from node [[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]]
[10:58:37,398][DEBUG][cluster.action.shard ] [Gaea] applying started
shards [[logstash-2012.11.05][4], node[Ovip2gcHSgC85ZnoDZfJFQ], [R],
s[INITIALIZING]], reason [after recovery (replica) from node [[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]]
[10:58:37,398][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,398][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.06][0]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,398][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][1]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,399][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.06][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,399][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][1]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,399][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][0]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,399][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,413][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.05][0] state: [RECOVERING]->[STARTED], reason [post
recovery]
[10:58:37,414][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.05][0] scheduling refresher every 1s
[10:58:37,414][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.05][0] scheduling optimizer / merger every 1s
[10:58:37,422][DEBUG][cluster.action.shard ] [Gaea] sending shard
started for [logstash-2012.11.05][0], node[Ovip2gcHSgC85ZnoDZfJFQ], [R],
s[INITIALIZING], reason [after recovery (replica) from node [[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]]
[10:58:37,422][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.05][0], node[Ovip2gcHSgC85ZnoDZfJFQ], [R],
s[INITIALIZING], reason [after recovery (replica) from node [[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]]
[10:58:37,447][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.07][2] state: [CREATED]
[10:58:37,447][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.07][2] state: [CREATED]->[RECOVERING], reason [from
[Kiden Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]
[10:58:37,447][DEBUG][cluster.action.shard ] [Gaea] sending shard
started for [logstash-2012.11.05][0], node[Ovip2gcHSgC85ZnoDZfJFQ], [R],
s[INITIALIZING], reason [master
[Gaea][Ovip2gcHSgC85ZnoDZfJFQ][inet[/172.22.52.34:9300]] marked shard as
initializing, but shard already started, mark shard as started]
[10:58:37,447][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.05][0], node[Ovip2gcHSgC85ZnoDZfJFQ], [R],
s[INITIALIZING], reason [master
[Gaea][Ovip2gcHSgC85ZnoDZfJFQ][inet[/172.22.52.34:9300]] marked shard as
initializing, but shard already started, mark shard as started]
[10:58:37,450][DEBUG][cluster.action.shard ] [Gaea] applying started
shards [[logstash-2012.11.05][0], node[Ovip2gcHSgC85ZnoDZfJFQ], [R],
s[INITIALIZING], [logstash-2012.11.05][0], node[Ovip2gcHSgC85ZnoDZfJFQ],
[R], s[INITIALIZING]], reason [after recovery (replica) from node [[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]]
[10:58:37,451][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][1]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,451][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,451][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][0]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,451][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][1]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,451][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,451][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.06][0]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,451][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.06][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,480][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.06][2] state: [CREATED]
[10:58:37,480][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.06][2] state: [CREATED]->[RECOVERING], reason [from
[Kiden Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]
[10:58:37,506][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.07][3], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [after recovery from gateway]
[10:58:37,507][DEBUG][cluster.action.shard ] [Gaea] applying started
shards [[logstash-2012.11.07][3], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING]], reason [after recovery from gateway]
[10:58:37,507][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][1]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,507][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.06][0]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,507][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.06][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,507][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,507][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][0]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,507][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,507][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][1]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,526][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.06][1], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [after recovery from gateway]
[10:58:37,526][DEBUG][cluster.action.shard ] [Gaea] applying started
shards [[logstash-2012.11.06][1], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING]], reason [after recovery from gateway]
[10:58:37,527][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][1]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,527][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,527][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][0]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,527][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.06][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,527][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.06][0]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,528][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][1]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,528][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,562][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.05][3], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [after recovery from gateway]
[10:58:37,562][DEBUG][cluster.action.shard ] [Gaea] applying started
shards [[logstash-2012.11.05][3], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING]], reason [after recovery from gateway]
[10:58:37,563][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][1]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,563][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][1]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,563][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,564][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.06][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,564][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,564][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][0]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,564][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.06][0]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,574][WARN ][cluster.action.shard ] [Gaea] received shard
failed for [logstash-2012.11.08][2], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [Failed to start shard, message
[IndexShardGatewayRecoveryException[[logstash-2012.11.08][2] shard
allocated for local recovery (post api), should exists, but doesn't]]]
[10:58:37,574][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.07][3], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [master
[Gaea][Ovip2gcHSgC85ZnoDZfJFQ][inet[/172.22.52.34:9300]] marked shard as
initializing, but shard already started, mark shard as started]
[10:58:37,574][DEBUG][cluster.action.shard ] [Gaea] Received failed
shard [logstash-2012.11.08][2], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [Failed to start shard, message
[IndexShardGatewayRecoveryException[[logstash-2012.11.08][2] shard
allocated for local recovery (post api), should exists, but doesn't]]]
[10:58:37,574][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,576][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,576][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.06][0]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,576][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][1]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,574][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.05][3], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [master
[Gaea][Ovip2gcHSgC85ZnoDZfJFQ][inet[/172.22.52.34:9300]] marked shard as
initializing, but shard already started, mark shard as started]
[10:58:37,575][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.06][1], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [master
[Gaea][Ovip2gcHSgC85ZnoDZfJFQ][inet[/172.22.52.34:9300]] marked shard as
initializing, but shard already started, mark shard as started]
[10:58:37,576][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][1]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,576][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][0]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,577][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.06][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,583][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.07][2] state: [RECOVERING]->[STARTED], reason [post
recovery]
[10:58:37,583][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.07][2] scheduling refresher every 1s
[10:58:37,583][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.06][2] state: [RECOVERING]->[STARTED], reason [post
recovery]
[10:58:37,583][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.07][2] scheduling optimizer / merger every 1s
[10:58:37,583][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.06][2] scheduling refresher every 1s
[10:58:37,583][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.06][2] scheduling optimizer / merger every 1s
[10:58:37,584][DEBUG][cluster.action.shard ] [Gaea] sending shard
started for [logstash-2012.11.06][2], node[Ovip2gcHSgC85ZnoDZfJFQ], [R],
s[INITIALIZING], reason [after recovery (replica) from node [[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]]
[10:58:37,584][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][2]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,584][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.06][2], node[Ovip2gcHSgC85ZnoDZfJFQ], [R],
s[INITIALIZING], reason [after recovery (replica) from node [[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]]
[10:58:37,584][DEBUG][cluster.action.shard ] [Gaea] sending shard
started for [logstash-2012.11.07][2], node[Ovip2gcHSgC85ZnoDZfJFQ], [R],
s[INITIALIZING], reason [after recovery (replica) from node [[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]]
[10:58:37,584][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.07][2], node[Ovip2gcHSgC85ZnoDZfJFQ], [R],
s[INITIALIZING], reason [after recovery (replica) from node [[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]]
[10:58:37,586][DEBUG][cluster.action.shard ] [Gaea] Applying failed
shard [logstash-2012.11.08][2], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [Failed to start shard, message
[IndexShardGatewayRecoveryException[[logstash-2012.11.08][2] shard
allocated for local recovery (post api), should exists, but doesn't]]]
[10:58:37,587][DEBUG][cluster.action.shard ] [Gaea] sending shard
started for [logstash-2012.11.07][2], node[Ovip2gcHSgC85ZnoDZfJFQ], [R],
s[INITIALIZING], reason [master
[Gaea][Ovip2gcHSgC85ZnoDZfJFQ][inet[/172.22.52.34:9300]] marked shard as
initializing, but shard already started, mark shard as started]
[10:58:37,587][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.07][2], node[Ovip2gcHSgC85ZnoDZfJFQ], [R],
s[INITIALIZING], reason [master
[Gaea][Ovip2gcHSgC85ZnoDZfJFQ][inet[/172.22.52.34:9300]] marked shard as
initializing, but shard already started, mark shard as started]
[10:58:37,587][DEBUG][cluster.action.shard ] [Gaea] sending shard
started for [logstash-2012.11.06][2], node[Ovip2gcHSgC85ZnoDZfJFQ], [R],
s[INITIALIZING], reason [master
[Gaea][Ovip2gcHSgC85ZnoDZfJFQ][inet[/172.22.52.34:9300]] marked shard as
initializing, but shard already started, mark shard as started]
[10:58:37,587][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.06][2], node[Ovip2gcHSgC85ZnoDZfJFQ], [R],
s[INITIALIZING], reason [master
[Gaea][Ovip2gcHSgC85ZnoDZfJFQ][inet[/172.22.52.34:9300]] marked shard as
initializing, but shard already started, mark shard as started]
[10:58:37,588][DEBUG][cluster.action.shard ] [Gaea] applying started
shards [[logstash-2012.11.05][3], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], [logstash-2012.11.06][2], node[Ovip2gcHSgC85ZnoDZfJFQ],
[R], s[INITIALIZING], [logstash-2012.11.07][2],
node[Ovip2gcHSgC85ZnoDZfJFQ], [R], s[INITIALIZING],
[logstash-2012.11.07][2], node[Ovip2gcHSgC85ZnoDZfJFQ], [R],
s[INITIALIZING], [logstash-2012.11.06][2], node[Ovip2gcHSgC85ZnoDZfJFQ],
[R], s[INITIALIZING]], reason [master
[Gaea][Ovip2gcHSgC85ZnoDZfJFQ][inet[/172.22.52.34:9300]] marked shard as
initializing, but shard already started, mark shard as started]
[10:58:37,588][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][1]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,588][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,588][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][1]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,588][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.06][0]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,588][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][2]: allocating [[logstash-2012.11.08][2], node[null],
[P], s[UNASSIGNED]] to [[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]] on primary
allocation
[10:58:37,588][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.06][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,588][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,590][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][0]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,610][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.07][3], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [master
[Gaea][Ovip2gcHSgC85ZnoDZfJFQ][inet[/172.22.52.34:9300]] marked shard as
initializing, but shard already started, mark shard as started]
[10:58:37,617][WARN ][cluster.action.shard ] [Gaea] received shard
failed for [logstash-2012.11.08][2], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [Failed to start shard, message
[IndexShardGatewayRecoveryException[[logstash-2012.11.08][2] shard
allocated for local recovery (post api), should exists, but doesn't]]]
[10:58:37,621][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.05][3], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [master
[Gaea][Ovip2gcHSgC85ZnoDZfJFQ][inet[/172.22.52.34:9300]] marked shard as
initializing, but shard already started, mark shard as started]
[10:58:37,621][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.06][1], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [master
[Gaea][Ovip2gcHSgC85ZnoDZfJFQ][inet[/172.22.52.34:9300]] marked shard as
initializing, but shard already started, mark shard as started]
[10:58:37,640][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.07][3] state: [CREATED]
[10:58:37,640][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.07][3] state: [CREATED]->[RECOVERING], reason [from
[Kiden Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]
[10:58:37,666][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.05][3] state: [CREATED]
[10:58:37,666][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.05][3] state: [CREATED]->[RECOVERING], reason [from
[Kiden Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]
[10:58:37,668][DEBUG][cluster.action.shard ] [Gaea] applying started
shards [[logstash-2012.11.05][3], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING]], reason [master
[Gaea][Ovip2gcHSgC85ZnoDZfJFQ][inet[/172.22.52.34:9300]] marked shard as
initializing, but shard already started, mark shard as started]
[10:58:37,668][DEBUG][cluster.action.shard ] [Gaea] Received failed
shard [logstash-2012.11.08][2], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [Failed to start shard, message
[IndexShardGatewayRecoveryException[[logstash-2012.11.08][2] shard
allocated for local recovery (post api), should exists, but doesn't]]]
[10:58:37,669][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,669][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][1]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,669][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][0]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,669][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.06][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,669][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][1]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,669][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,669][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.06][0]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,670][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.06][1], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [master
[Gaea][Ovip2gcHSgC85ZnoDZfJFQ][inet[/172.22.52.34:9300]] marked shard as
initializing, but shard already started, mark shard as started]
[10:58:37,671][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][2]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,673][DEBUG][cluster.action.shard ] [Gaea] Applying failed
shard [logstash-2012.11.08][2], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [Failed to start shard, message
[IndexShardGatewayRecoveryException[[logstash-2012.11.08][2] shard
allocated for local recovery (post api), should exists, but doesn't]]]
[10:58:37,682][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.07][3], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [master
[Gaea][Ovip2gcHSgC85ZnoDZfJFQ][inet[/172.22.52.34:9300]] marked shard as
initializing, but shard already started, mark shard as started]
[10:58:37,683][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.05][3], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [master
[Gaea][Ovip2gcHSgC85ZnoDZfJFQ][inet[/172.22.52.34:9300]] marked shard as
initializing, but shard already started, mark shard as started]
[10:58:37,692][WARN ][cluster.action.shard ] [Gaea] received shard
failed for [logstash-2012.11.08][2], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [Failed to start shard, message
[IndexShardGatewayRecoveryException[[logstash-2012.11.08][2] shard
allocated for local recovery (post api), should exists, but doesn't]]]
[10:58:37,692][DEBUG][cluster.action.shard ] [Gaea] Received failed
shard [logstash-2012.11.08][2], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [Failed to start shard, message
[IndexShardGatewayRecoveryException[[logstash-2012.11.08][2] shard
allocated for local recovery (post api), should exists, but doesn't]]]
[10:58:37,757][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.07][3], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [master
[Gaea][Ovip2gcHSgC85ZnoDZfJFQ][inet[/172.22.52.34:9300]] marked shard as
initializing, but shard already started, mark shard as started]
[10:58:37,761][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.05][3], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [master
[Gaea][Ovip2gcHSgC85ZnoDZfJFQ][inet[/172.22.52.34:9300]] marked shard as
initializing, but shard already started, mark shard as started]
[10:58:37,761][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.06][1], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [master
[Gaea][Ovip2gcHSgC85ZnoDZfJFQ][inet[/172.22.52.34:9300]] marked shard as
initializing, but shard already started, mark shard as started]
[10:58:37,762][WARN ][cluster.action.shard ] [Gaea] received shard
failed for [logstash-2012.11.08][2], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [Failed to start shard, message
[IndexShardGatewayRecoveryException[[logstash-2012.11.08][2] shard
allocated for local recovery (post api), should exists, but doesn't]]]
[10:58:37,765][DEBUG][cluster.action.shard ] [Gaea] applying started
shards [[logstash-2012.11.06][1], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING]], reason [master
[Gaea][Ovip2gcHSgC85ZnoDZfJFQ][inet[/172.22.52.34:9300]] marked shard as
initializing, but shard already started, mark shard as started]
[10:58:37,766][DEBUG][cluster.action.shard ] [Gaea] Received failed
shard [logstash-2012.11.08][2], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [Failed to start shard, message
[IndexShardGatewayRecoveryException[[logstash-2012.11.08][2] shard
allocated for local recovery (post api), should exists, but doesn't]]]
[10:58:37,770][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.05][3] state: [RECOVERING]->[STARTED], reason [post
recovery]
[10:58:37,770][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.05][3] scheduling refresher every 1s
[10:58:37,770][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.05][3] scheduling optimizer / merger every 1s
[10:58:37,774][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.07][3] state: [RECOVERING]->[STARTED], reason [post
recovery]
[10:58:37,774][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.07][3] scheduling refresher every 1s
[10:58:37,774][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.07][3] scheduling optimizer / merger every 1s
[10:58:37,774][DEBUG][cluster.action.shard ] [Gaea] sending shard
started for [logstash-2012.11.05][3], node[Ovip2gcHSgC85ZnoDZfJFQ], [R],
s[INITIALIZING], reason [after recovery (replica) from node [[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]]
[10:58:37,774][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.05][3], node[Ovip2gcHSgC85ZnoDZfJFQ], [R],
s[INITIALIZING], reason [after recovery (replica) from node [[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]]
[10:58:37,774][DEBUG][cluster.action.shard ] [Gaea] applying started
shards [[logstash-2012.11.05][3], node[Ovip2gcHSgC85ZnoDZfJFQ], [R],
s[INITIALIZING]], reason [after recovery (replica) from node [[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]]
[10:58:37,774][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,774][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.06][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,775][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][1]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,775][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,775][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][0]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,775][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][2]: allocating [[logstash-2012.11.08][2], node[null],
[P], s[UNASSIGNED]] to [[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]] on primary
allocation
[10:58:37,775][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.06][0]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,775][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][1]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,777][DEBUG][cluster.action.shard ] [Gaea] sending shard
started for [logstash-2012.11.07][3], node[Ovip2gcHSgC85ZnoDZfJFQ], [R],
s[INITIALIZING], reason [master
[Gaea][Ovip2gcHSgC85ZnoDZfJFQ][inet[/172.22.52.34:9300]] marked shard as
initializing, but shard already started, mark shard as started]
[10:58:37,777][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.07][3], node[Ovip2gcHSgC85ZnoDZfJFQ], [R],
s[INITIALIZING], reason [master
[Gaea][Ovip2gcHSgC85ZnoDZfJFQ][inet[/172.22.52.34:9300]] marked shard as
initializing, but shard already started, mark shard as started]
[10:58:37,782][DEBUG][cluster.action.shard ] [Gaea] sending shard
started for [logstash-2012.11.07][3], node[Ovip2gcHSgC85ZnoDZfJFQ], [R],
s[INITIALIZING], reason [after recovery (replica) from node [[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]]
[10:58:37,782][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.07][3], node[Ovip2gcHSgC85ZnoDZfJFQ], [R],
s[INITIALIZING], reason [after recovery (replica) from node [[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]]
[10:58:37,804][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.06][1] state: [CREATED]
[10:58:37,804][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.06][1] state: [CREATED]->[RECOVERING], reason [from
[Kiden Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]
[10:58:37,810][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.06][1], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [master
[Gaea][Ovip2gcHSgC85ZnoDZfJFQ][inet[/172.22.52.34:9300]] marked shard as
initializing, but shard already started, mark shard as started]
[10:58:37,810][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.05][3], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [master
[Gaea][Ovip2gcHSgC85ZnoDZfJFQ][inet[/172.22.52.34:9300]] marked shard as
initializing, but shard already started, mark shard as started]
[10:58:37,810][DEBUG][cluster.action.shard ] [Gaea] applying started
shards [[logstash-2012.11.07][3], node[Ovip2gcHSgC85ZnoDZfJFQ], [R],
s[INITIALIZING], [logstash-2012.11.07][3], node[Ovip2gcHSgC85ZnoDZfJFQ],
[R], s[INITIALIZING], [logstash-2012.11.05][3],
node[rDKyOTqpTeStQirDZ-yBMg], [P], s[INITIALIZING]], reason [master
[Gaea][Ovip2gcHSgC85ZnoDZfJFQ][inet[/172.22.52.34:9300]] marked shard as
initializing, but shard already started, mark shard as started]
[10:58:37,811][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,811][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][1]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,811][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,811][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][0]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,811][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][1]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,811][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.06][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,811][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.06][0]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,819][WARN ][cluster.action.shard ] [Gaea] received shard
failed for [logstash-2012.11.08][2], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [Failed to start shard, message
[IndexShardGatewayRecoveryException[[logstash-2012.11.08][2] shard
allocated for local recovery (post api), should exists, but doesn't]]]
[10:58:37,819][DEBUG][cluster.action.shard ] [Gaea] Received failed
shard [logstash-2012.11.08][2], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [Failed to start shard, message
[IndexShardGatewayRecoveryException[[logstash-2012.11.08][2] shard
allocated for local recovery (post api), should exists, but doesn't]]]
[10:58:37,819][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.06][0]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,819][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,820][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.06][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,820][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][1]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,820][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][1]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,820][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,820][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][0]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,824][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][2]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,825][DEBUG][cluster.action.shard ] [Gaea] Applying failed
shard [logstash-2012.11.08][2], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [Failed to start shard, message
[IndexShardGatewayRecoveryException[[logstash-2012.11.08][2] shard
allocated for local recovery (post api), should exists, but doesn't]]]
[10:58:37,839][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.06][1] state: [RECOVERING]->[STARTED], reason [post
recovery]
[10:58:37,839][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.06][1] scheduling refresher every 1s
[10:58:37,839][DEBUG][index.shard.service ] [Gaea]
[logstash-2012.11.06][1] scheduling optimizer / merger every 1s
[10:58:37,840][DEBUG][cluster.action.shard ] [Gaea] sending shard
started for [logstash-2012.11.06][1], node[Ovip2gcHSgC85ZnoDZfJFQ], [R],
s[INITIALIZING], reason [after recovery (replica) from node [[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]]
[10:58:37,840][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.06][1], node[Ovip2gcHSgC85ZnoDZfJFQ], [R],
s[INITIALIZING], reason [after recovery (replica) from node [[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]]
[10:58:37,840][DEBUG][cluster.action.shard ] [Gaea] applying started
shards [[logstash-2012.11.06][1], node[Ovip2gcHSgC85ZnoDZfJFQ], [R],
s[INITIALIZING]], reason [after recovery (replica) from node [[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]]
[10:58:37,840][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][1]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,840][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,841][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][1]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,841][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][0]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,841][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.06][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,841][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][2]: allocating [[logstash-2012.11.08][2], node[null],
[P], s[UNASSIGNED]] to [[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]] on primary
allocation
[10:58:37,841][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.06][0]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,841][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,850][DEBUG][cluster.action.shard ] [Gaea] received shard
started for [logstash-2012.11.05][3], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [master
[Gaea][Ovip2gcHSgC85ZnoDZfJFQ][inet[/172.22.52.34:9300]] marked shard as
initializing, but shard already started, mark shard as started]
[10:58:37,854][WARN ][cluster.action.shard ] [Gaea] received shard
failed for [logstash-2012.11.08][2], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [Failed to start shard, message
[IndexShardGatewayRecoveryException[[logstash-2012.11.08][2] shard
allocated for local recovery (post api), should exists, but doesn't]]]
[10:58:37,854][DEBUG][cluster.action.shard ] [Gaea] Received failed
shard [logstash-2012.11.08][2], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [Failed to start shard, message
[IndexShardGatewayRecoveryException[[logstash-2012.11.08][2] shard
allocated for local recovery (post api), should exists, but doesn't]]]
[10:58:37,854][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][0]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,854][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][1]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,854][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][1]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,854][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.07][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,854][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,854][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.06][0]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,855][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.06][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,855][DEBUG][gateway.local ] [Gaea]
[logstash-2012.11.08][2]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[10:58:37,857][DEBUG][cluster.action.shard ] [Gaea] Applying failed
shard [logstash-2012.11.08][2], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [Failed to start shard, message
[IndexShardGatewayRecoveryException[[logstash-2012.11.08][2] shard
allocated for local recovery (post api), should exists, but doesn't]]]
[10:58:37,876][WARN ][cluster.action.shard ] [Gaea] received shard
failed for [logstash-2012.11.08][2], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [Failed to start shard, message
[IndexShardGatewayRecoveryException[[logstash-2012.11.08][2] shard
allocated for local recovery (post api), should exists, but doesn't]]]
[10:58:37,876][DEBUG][cluster.action.shard ] [Gaea] Received failed
shard [logstash-2012.11.08][2], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [Failed to start shard, message
[IndexShardGatewayRecoveryException[[logstash-2012.11.08][2] shard
allocated for local recovery (post api), should exists, but doesn't]]]
[10:58:37,899][WARN ][cluster.action.shard ] [Gaea] received shard
failed for [logstash-2012.11.08][2], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [Failed to start shard, message
[IndexShardGatewayRecoveryException[[logstash-2012.11.08][2] shard
allocated for local recovery (post api), should exists, but doesn't]]]
[10:58:37,899][DEBUG][cluster.action.shard ] [Gaea] Received failed
shard [logstash-2012.11.08][2], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [Failed to start shard, message
[IndexShardGatewayRecoveryException[[logstash-2012.11.08][2] shard
allocated for local recovery (post api), should exists, but doesn't]]]
[10:58:37,924][WARN ][cluster.action.shard ] [Gaea] received shard
failed for [logstash-2012.11.08][2], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [Failed to start shard, message
[IndexShardGatewayRecoveryException[[logstash-2012.11.08][2] shard
allocated for local recovery (post api), should exists, but doesn't]]]
[10:58:37,924][DEBUG][cluster.action.shard ] [Gaea] Received failed
shard [logstash-2012.11.08][2], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [Failed to start shard, message
[IndexShardGatewayRecoveryException[[logstash-2012.11.08][2] shard
allocated for local recovery (post api), should exists, but doesn't]]]
[10:58:37,945][WARN ][cluster.action.shard ] [Gaea] received shard
failed for [logstash-2012.11.08][2], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [Failed to start shard, message
[IndexShardGatewayRecoveryException[[logstash-2012.11.08][2] shard
allocated for local recovery (post api), should exists, but doesn't]]]
[10:58:37,945][DEBUG][cluster.action.shard ] [Gaea] Received failed
shard [logstash-2012.11.08][2], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [Failed to start shard, message
[IndexShardGatewayRecoveryException[[logstash-2012.11.08][2] shard
allocated for local recovery (post api), should exists, but doesn't]]]
[10:58:37,966][WARN ][cluster.action.shard ] [Gaea] received shard
failed for [logstash-2012.11.08][2], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [Failed to start shard, message
[IndexShardGatewayRecoveryException[[logstash-2012.11.08][2] shard
allocated for local recovery (post api), should exists, but doesn't]]]
[10:58:37,967][DEBUG][cluster.action.shard ] [Gaea] Received failed
shard [logstash-2012.11.08][2], node[rDKyOTqpTeStQirDZ-yBMg], [P],
s[INITIALIZING], reason [Failed to start shard, message
[IndexShardGatewayRecoveryException[[logstash-2012.11.08][2] shard
allocated for local recovery (post api), should exists, but doesn't]]]

--

Hello Ulli,

I don't see any more "too many open files" errors, which I think is nice :)

You seem to be starting more than one Elasticsearch instance on the same
machine. This would be your master (the one whose log you posted):

[10:56:49,758][INFO ][cluster.service ] [Gaea] new_master
[Gaea][Ovip2gcHSgC85ZnoDZfJFQ][inet[/172.22.52.34:9300]], reason:
zen-disco-join (elected_as_master)

Then another instance is joining the cluster, like:

[10:58:36,160][INFO ][cluster.service ] [Gaea] added {[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]],}, reason:
zen-disco-receive(join from node[[Kiden
Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]])

I assume both have the same configuration, thus the same data
directory. And I guess (haven't tested, though) this could lead to
unexpected results when importing dangling indices:

[10:58:36,202][INFO ][gateway.local.state.meta ] [Gaea] auto importing
dangled indices
[logstash-2012.11.06/OPEN][logstash-2012.11.05/OPEN][logstash-2012.11.08/OPEN][logstash-2012.11.07/OPEN]
from [[Kiden Nixon][rDKyOTqpTeStQirDZ-yBMg][inet[/172.22.52.34:9301]]]

Are you using the Logstash agent in embedded mode? That might explain why
there seem to be two nodes when you only intentionally started
Elasticsearch once. AFAIK, in embedded mode Logstash will start a node
with data:true.
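
If that's the case, you could point Logstash at your standalone node
instead of running the embedded one. A rough sketch, assuming your
Logstash version supports the embedded and host options on its
elasticsearch output:

output {
  elasticsearch {
    embedded => false        # don't start an in-process ES node
    host => "172.22.52.34"   # your standalone Elasticsearch node
  }
}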

At this point, I'd make sure to start only one Elasticsearch node and
let it allocate all the shards it can. You can use Elasticsearch Head
to get a better view of this:
http://mobz.github.com/elasticsearch-head/

When everything settles (no more shards are recovering), I'd delete
the indices which still have unallocated primary shards. With the
default settings you'd have 5 shards per index and one replica per
shard, so on a single node you should see 5 shards allocated and 5
replicas unallocated. That would be the normal situation, since it
doesn't make sense to allocate a replica on the same node as its
primary shard.

As far as I understand from the log, if a shard can't be recovered,
its data is simply not there anymore for some reason. So I'd delete
the affected index if I could afford to:

curl -XDELETE localhost:9200/index_name_goes_here

I'd repeat the process on every "partial" index until the cluster
state goes from red to yellow in Elasticsearch Head. Yellow means
every primary shard is allocated; green means all replicas are
allocated as well, but that's not achievable on a single node.
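
By the way, if you'd rather stay on the command line than install
Head, the cluster health API reports the same red/yellow/green status
plus the number of unassigned shards, and level=indices shows which
indices are still red:

curl -XGET 'http://localhost:9200/_cluster/health?level=indices&pretty=true'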

Best regards,
Radu

http://sematext.com/ -- Elasticsearch -- Solr -- Lucene

--

Hello Radu,

You are right. I had just discovered that there was still an
embedded elasticsearch running. I changed that, and now it looks good.

Thank you, Ulli


--

Hi again,

Although I raised the file limit to 400,000, I get 'too many open files'
exceptions after a while:

[10:07:34,973][WARN ][cluster.action.shard ] [Garrett, Jonathan "John"]
received shard failed for [logstash-2012.11.12][2],
node[ngYc86QCSLeWWAVd_dY8qw], [R], s[INITIALIZING], reason [Failed to start
shard, message [RecoveryFailedException[[logstash-2012.11.12][2]: Recovery
failed from [Garrett, Jonathan
"John"][q8v8YxXRRZehUUiCfkmqaw][inet[/172.22.52.34:9300]] into [Eric
Slaughter][ngYc86QCSLeWWAVd_dY8qw][inet[/172.22.52.34:9301]]]; nested:
RemoteTransportException[[Garrett, Jonathan
"John"][inet[/172.22.52.34:9300]][index/shard/recovery/startRecovery]];
nested: RecoveryEngineException[[logstash-2012.11.12][2] Phase[2] Execution
failed]; nested: RemoteTransportException[[Eric
Slaughter][inet[/172.22.52.34:9301]][index/shard/recovery/prepareTranslog]];
nested: EngineCreationFailureException[[logstash-2012.11.12][2] Failed to
open reader on writer]; nested:
FileNotFoundException[/opt/data/elasticsearch/nodes/0/indices/logstash-2012.11.12/2/index/_4.prx
(Too many open files)]; ]]

Which file limit does elasticsearch need? How can I reduce the number
of files elasticsearch needs?

Thanks in advance. Regards,
Ulli

--

Hello Ulli,

The number of files Elasticsearch needs depends on your total number
of segments. A segment is a chunk of a Lucene index, and a Lucene
index is what backs each of your shards in ES. Shards, in turn, are
chunks of ES indices.

So if you keep your logstash logs in daily indices for 50 days, each
index has the default 5 shards, and each shard has around 100
segments, then you can expect at least 50 x 5 x 100 = 25K open files.
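
To see how many files a node actually holds open right now, you can
count its file descriptors at the OS level on Linux, for example
(12345 is a placeholder for your Elasticsearch process id):

ls /proc/12345/fd | wc -l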

Possible solutions:

  • double-check that your max-open-files limit is actually applied:

curl -XGET 'http://localhost:9200/_cluster/nodes?process=true&pretty=true' 2>/dev/null | grep max_file_descriptors

  • do an optimize[0] with a low number of segments. Please note that it
    will take some time and generate some IO load, especially if you have
    a lot of data. Something like this will do it for all your indices
    (it's normally not necessary for the current day's index, but for the
    older ones it should also make searches faster):

curl -XPOST 'http://localhost:9200/_optimize?max_num_segments=3'

  • reduce the number of shards for future indices via an index template.
    For example, this would put a template that makes all new logstash
    indices have one shard:

curl -XPUT 'http://localhost:9200/_template/template_1' -d '
{
  "template" : "logstash-*",
  "settings" : {
    "number_of_shards" : 1
  }
}'
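
Once the template is in, you can check that a newly created index
picked it up by looking at its settings (logstash-2012.11.13 is just
an example name for the next day's index):

curl -XGET 'http://localhost:9200/logstash-2012.11.13/_settings?pretty=true'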

[0] http://www.elasticsearch.org/guide/reference/api/admin-indices-optimize.html

Best regards,
Radu

http://sematext.com/ -- Elasticsearch -- Solr -- Lucene


--

Hello Radu,

I looked at my problem again and discovered that the file limit is
obviously applied to the master node only. Is this correct behavior,
or what might be going wrong?

curl -XGET 'http://localhost:9200/_cluster/nodes?process=true&pretty=true'
{
  "ok" : true,
  "cluster_name" : "logstash-standalone-elasticsearch",
  "nodes" : {
    "LGqR9fy1QxCsRTqFe56hHw" : {
      "name" : "Ulli Scheel",
      "transport_address" : "inet[/...:9301]",
      "hostname" : "...",
      "attributes" : {
        "client" : "true",
        "data" : "false"
      },
      "process" : {
        "refresh_interval" : 1000,
        "id" : 10187,
        "max_file_descriptors" : 1024
      }
    },
    "OSX1AqtpTR6R0l2Tf_jJ5Q" : {
      "name" : "Ulli Scheel",
      "transport_address" : "inet[/...:9302]",
      "hostname" : "...",
      "attributes" : {
        "client" : "true",
        "data" : "false"
      },
      "process" : {
        "refresh_interval" : 1000,
        "id" : 10187,
        "max_file_descriptors" : 1024
      }
    },
    "Pu26sj4BRv-yJzY6OwqcWw" : {
      "name" : "Ulli Scheel",
      "transport_address" : "inet[/...:9303]",
      "hostname" : "...",
      "attributes" : {
        "client" : "true",
        "data" : "false"
      },
      "process" : {
        "refresh_interval" : 1000,
        "id" : 10187,
        "max_file_descriptors" : 1024
      }
    },
    "FrfBCYqZSkWyR8xo2kfPWg" : {
      "name" : "Ulli Scheel",
      "transport_address" : "inet[/...:9300]",
      "hostname" : "...",
      "http_address" : "inet[/...:9200]",
      "process" : {
        "refresh_interval" : 1000,
        "id" : 6062,
        "max_file_descriptors" : 400000
      }
    }
  }
}

Best regards
Ulli

--

Hi Ulli,

The number of open files needs to be set on each node - it's not a
cluster-wide setting. I would update the settings on the other nodes and
then use this command to verify that they have been applied successfully.
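
For example (a minimal sketch, assuming you launch each node from a
shell script), raise the limit in the same shell right before
starting the process:

ulimit -n 32000
bin/elasticsearch -f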

Also, about the other nodes: are these all embedded instances of ES on
the same server? (It looks like there are 4 of them.)

Derry


--

Sorry, after looking again, it seems that you have 1 master node and
3 client-only (no-data) nodes in your system.

400,000 is a very large number of file descriptors. Can you check the
actual open number on your system (ulimit -n as root on the command line)?

--

The actual open-files limit (not the number of currently open files)
as root on the command line is 1024. I set the file limit for the
master in the elasticsearch start script to 400,000.
Is it a problem if the clients have a limit of 'only' 1024?


--

Hi Ulli,

You need to set the parameters on the OS as well as within elasticsearch:
http://www.elasticsearch.org/tutorials/2011/04/06/too-many-open-files.html

That page shows how to change the setting and test it, and then you
should verify with the cluster API call (like you showed already).

400k files seems high - I would test with 32k/64k first.
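
To see what the shell that starts each node actually allows, check
both values:

ulimit -Hn    # hard limit
ulimit -Sn    # soft limit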

Derry


--

On Tuesday, 20 November 2012 10:52:52 UTC+1, Derry O' Sullivan wrote:

Hi Ulli,

You need to set the parameters on the OS as well as within elasticsearch:
http://www.elasticsearch.org/tutorials/2011/04/06/too-many-open-files.html

I already tried this tutorial, but it has no effect (I'm starting
elasticsearch as root):

cat /etc/security/limits.conf

root soft nofile 32000
root hard nofile 32000

ulimit -Sn

1024

--

Hi Ulli,

In that link, see the section just below the ulimit -Sn test:

Derry

If you are logged in as elasticsearch you have to log out and log in again
to see the new limits.

If you still see the previous limit, run:

  • egrep -r pam_limits /etc/pam.d/*

and check that all pam_limits.so are not commented out.
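
An active (non-commented) entry typically looks like this:

session    required    pam_limits.so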

You can check the limit that elasticsearch really has by adding the
flag es.max-open-files=true, for example:

  • $ bin/elasticsearch -f -Des.max-open-files=true
    [2011-04-05 04:12:02,687][INFO ][bootstrap ] max_open_files [32000]

If sudo -u elasticsearch -s "ulimit -Sn" shows 32000 but you still have a
low limit when you run Elasticsearch, you're probably running it through
another program that doesn't support PAM: a frequent offender is
supervisord.

The only solution I know to this problem is to raise the nofile limit for
the user running supervisord, but this will obviously raise the limit for
all the processes running under supervisord, not an ideal situation.

Consider using the Elasticsearch service wrapper
(http://github.com/elasticsearch/elasticsearch-servicewrapper) instead.


--