Shard failures after upgrading from 0.18.7 to 0.19.0

Hi,
We have a cluster of 3 ES nodes and around 200 million documents (10 shards, 2 replicas). We upgraded from version 0.18.7 to 0.19.0 across the cluster. These are the steps we followed:

  1. Stop live data indexing.
  2. Flush the index (see the flush call sketched after this list).
  3. Stop ES on all the nodes and upgrade it.
  4. Start ES.
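
For step 2, the flush can be issued over the REST API before the nodes are stopped; a minimal sketch using Python's standard library, assuming the ES HTTP endpoint is reachable on localhost:9200 (adjust host/port to your setup):

    # Minimal sketch: flush all indices so their transaction logs are emptied
    # before the nodes are shut down for the upgrade.
    # Assumes an Elasticsearch HTTP endpoint on localhost:9200.
    import urllib.request

    req = urllib.request.Request("http://localhost:9200/_flush", method="POST")
    with urllib.request.urlopen(req) as resp:
        print(resp.status, resp.read().decode())

The same call can be scoped to a single index (e.g. /tweets/_flush) if only one index needs it.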

But while recovering the old indices, the master node continuously spews out errors of the following sort:

[2012-03-07 11:05:51,374][WARN ][cluster.action.shard ] [Scrambler] sending failed shard for [tweets][6], node[DZ_lKvWFRTuUXMGeitDHIA], [P], s[INITIALIZING], reason [Failed to start shard, message [IndexShardGatewayRecoveryException[[tweets][6] failed to recover shard]; nested: StringIndexOutOfBoundsException[String index out of range: 0]; ]]
[2012-03-07 11:05:51,374][WARN ][indices.cluster ] [Scrambler] [tweets][9] failed to start shard
org.elasticsearch.index.gateway.IndexShardGatewayRecoveryException: [tweets][9] failed to recover shard
at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:201)
at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:177)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.lang.StringIndexOutOfBoundsException: String index out of range: 0
at java.lang.String.charAt(String.java:686)
at org.elasticsearch.index.mapper.MapperService.add(MapperService.java:180)
at org.elasticsearch.index.mapper.MapperService.add(MapperService.java:172)
at org.elasticsearch.index.mapper.MapperService.documentMapperWithAutoCreate(MapperService.java:298)
at org.elasticsearch.index.shard.service.InternalIndexShard.prepareIndex(InternalIndexShard.java:310)
at org.elasticsearch.index.shard.service.InternalIndexShard.performRecoveryOperation(InternalIndexShard.java:624)
at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:196)

The other (non-master) ES nodes are throwing errors like:

[2012-03-07 11:08:18,026][WARN ][cluster.action.shard ] [Shingen Harada] received shard failed for [tweets][7], node[DZ_lKvWFRTuUXMGeitDHIA], [P], s[INITIALIZING], reason [Failed to start shard, message [IndexShardGatewayRecoveryException[[tweets][7] failed to recover shard]; nested: StringIndexOutOfBoundsException[String index out of range: 0]; ]]

Any suggestions as to what might be going wrong here?

Cheers
Nitish

That's strange… it means that, for some reason, the mapping type value is empty.

First, note that if you want to downgrade, you need to delete all the files in _state except for the backup files, and then rename the backup files back.

Back to the problem: I think that either you did not flush all the data, or some data was indexed after the flush, leaving the transaction log (which a flush clears) with data from 0.18.7, and reading it causes the failures. What you can do, if you are sure you flushed, is simply delete the relevant index/shard transaction log. It lives under data/nodes/0/indices/tweets/9/translog (node ordinal 0, an index named tweets, shard number 9).
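
In case it helps, here is a minimal sketch of that deletion for every shard of the tweets index on one node, assuming the default on-disk layout data/nodes/<ordinal>/indices/<index>/<shard>/translog mentioned above and that all ES nodes are stopped first (DATA_DIR is a placeholder for your actual path.data, not a real default):

    # Minimal sketch: remove the translog directory of every "tweets" shard on
    # this node. Run only while Elasticsearch is stopped; adjust DATA_DIR to
    # wherever this node's data directory actually lives.
    import glob
    import shutil

    DATA_DIR = "/path/to/elasticsearch/data"  # placeholder, adjust for your install

    for translog in sorted(glob.glob(f"{DATA_DIR}/nodes/*/indices/tweets/*/translog")):
        print("removing", translog)
        shutil.rmtree(translog)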

Hi Shay,
Deleting the translogs from all the indices' data directories seems to have done the trick; data handoffs between the nodes are now in progress.
But the master node is still throwing these errors:

[2012-03-07 14:57:39,341][WARN ][cluster.action.shard ] [Scream] sending failed shard for [tweets][0], node[J_kYRBEYSkq8A0nLU85iGg], [P], s[INITIALIZING], reason [Failed to start shard, message [IndexShardGatewayRecoveryException[[tweets][0] failed to recover shard]; nested: StringIndexOutOfBoundsException[String index out of range: 0]; ]]
[2012-03-07 14:57:39,341][WARN ][cluster.action.shard ] [Scream] received shard failed for [tweets][0], node[J_kYRBEYSkq8A0nLU85iGg], [P], s[INITIALIZING], reason [Failed to start shard, message [IndexShardGatewayRecoveryException[[tweets][0] failed to recover shard]; nested: StringIndexOutOfBoundsException[String index out of range: 0]; ]]
[2012-03-07 15:07:57,043][WARN ][indices.cluster ] [Scream] [tweets][0] failed to start shard
org.elasticsearch.index.gateway.IndexShardGatewayRecoveryException: [tweets][0] failed to recover shard
at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:201)
at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:177)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.lang.StringIndexOutOfBoundsException: String index out of range: 0
at java.lang.String.charAt(String.java:686)
at org.elasticsearch.index.mapper.MapperService.add(MapperService.java:180)
at org.elasticsearch.index.mapper.MapperService.add(MapperService.java:172)
at org.elasticsearch.index.mapper.MapperService.documentMapperWithAutoCreate(MapperService.java:298)
at org.elasticsearch.index.shard.service.InternalIndexShard.prepareIndex(InternalIndexShard.java:310)
at org.elasticsearch.index.shard.service.InternalIndexShard.performRecoveryOperation(InternalIndexShard.java:624)
at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:196)
... 4 more

Earlier these errors were much more frequent and continuous; now we only get them once in a while (every 10 minutes or so). Any other suspicions?

Cheers
N.

It seems like this shard (the tweets index, shard 0) still has a translog to recover from; the exception comes from iterating over the translog during recovery. Try shutting down the cluster and deleting the relevant translogs.
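
Before deleting anything, it may help to see which shards still carry translog data; a minimal sketch under the same default-layout assumption as the earlier snippet (DATA_DIR is again a placeholder for your path.data):

    # Minimal sketch: list the "tweets" translog directories that are non-empty,
    # so only those shards need their translog removed once the cluster is down.
    import glob
    import os

    DATA_DIR = "/path/to/elasticsearch/data"  # placeholder, adjust for your install

    for translog in sorted(glob.glob(f"{DATA_DIR}/nodes/*/indices/tweets/*/translog")):
        size = sum(os.path.getsize(os.path.join(translog, name)) for name in os.listdir(translog))
        if size:
            print(translog, size, "bytes")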

Hi Kimchy,

I am also getting a similar problem. We had 2 working environments (production and development) on 0.18.7 and needed one more environment for testing, so I copied the "nodes" folder from the production ES server to the testing ES server. The copied data was written under 0.18.7, and I installed 0.19.0 on the testing server; I did a flush too. It is working partially, but some shards are failing and I get errors in some places. Keep in mind that I have since upgraded all the other environments to 0.19.0 with the same data, and the old environments are working fine; only the new testing environment is not.

Here is the content from the log file:

[2012-03-19 16:12:24,516][DEBUG][action.count ] [Rodstvow] [es_5_relations][0], node[sU28LxXdS5W61zY1oT3ygA], [P], s[STARTED]: Failed to execute [[[es_5_relations]][relations], querySource[{"bool":{"must":[[{"terms":{"guid_2":[null]}},{"terms":{"relation":["fan_of"]}}]]}}]]
org.elasticsearch.index.query.QueryParsingException: [es_5_relations] Failed to parse
at org.elasticsearch.index.query.IndexQueryParserService.parse(IndexQueryParserService.java:197)
at org.elasticsearch.index.shard.service.InternalIndexShard.count(InternalIndexShard.java:409)
at org.elasticsearch.action.count.TransportCountAction.shardOperation(TransportCountAction.java:134)
at org.elasticsearch.action.count.TransportCountAction.shardOperation(TransportCountAction.java:50)
at org.elasticsearch.action.support.broadcast.TransportBroadcastOperationAction$AsyncBroadcastAction.performOperation(TransportBroadcastOperationAction.java:234)
at org.elasticsearch.action.support.broadcast.TransportBroadcastOperationAction$AsyncBroadcastAction.performOperation(TransportBroadcastOperationAction.java:211)
at org.elasticsearch.action.support.broadcast.TransportBroadcastOperationAction$AsyncBroadcastAction$1.run(TransportBroadcastOperationAction.java:187)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:679)
Caused by: java.lang.NumberFormatException: For input string: "null"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Long.parseLong(Long.java:438)
at java.lang.Long.parseLong(Long.java:478)
at org.elasticsearch.index.mapper.core.LongFieldMapper.fieldQuery(LongFieldMapper.java:165)
at org.elasticsearch.index.query.TermsQueryParser.parse(TermsQueryParser.java:112)
at org.elasticsearch.index.query.QueryParseContext.parseInnerQuery(QueryParseContext.java:192)
at org.elasticsearch.index.query.BoolQueryParser.parse(BoolQueryParser.java:81)
at org.elasticsearch.index.query.QueryParseContext.parseInnerQuery(QueryParseContext.java:192)
at org.elasticsearch.index.query.IndexQueryParserService.parse(IndexQueryParserService.java:243)
at org.elasticsearch.index.query.IndexQueryParserService.parse(IndexQueryParserService.java:193)
... 9 more
[2012-03-19 16:12:24,518][DEBUG][action.count ] [Rodstvow] [es_5_relations][1], node[sU28LxXdS5W61zY1oT3ygA], [P], s[STARTED]: Failed to execute [[[es_5_relations]][relations], querySource[{"bool":{"must":[[{"terms":{"guid_2":[null]}},{"terms":{"relation":["fan_of"]}}]]}}]]
org.elasticsearch.index.query.QueryParsingException: [es_5_relations] Failed to parse
at org.elasticsearch.index.query.IndexQueryParserService.parse(IndexQueryParserService.java:197)
at org.elasticsearch.index.shard.service.InternalIndexShard.count(InternalIndexShard.java:409)
at org.elasticsearch.action.count.TransportCountAction.shardOperation(TransportCountAction.java:134)
at org.elasticsearch.action.count.TransportCountAction.shardOperation(TransportCountAction.java:50)
at org.elasticsearch.action.support.broadcast.TransportBroadcastOperationAction$AsyncBroadcastAction.performOperation(TransportBroadcastOperationAction.java:234)
at org.elasticsearch.action.support.broadcast.TransportBroadcastOperationAction$AsyncBroadcastAction.performOperation(TransportBroadcastOperationAction.java:211)
at org.elasticsearch.action.support.broadcast.TransportBroadcastOperationAction$AsyncBroadcastAction$1.run(TransportBroadcastOperationAction.java:187)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:679)
Caused by: java.lang.NumberFormatException: For input string: "null"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Long.parseLong(Long.java:438)
at java.lang.Long.parseLong(Long.java:478)
at org.elasticsearch.index.mapper.core.LongFieldMapper.fieldQuery(LongFieldMapper.java:165)
at org.elasticsearch.index.query.TermsQueryParser.parse(TermsQueryParser.java:112)

This failure seems to come from parsing the search request; it is not related to the "data" upgrade. It looks like you are passing a null value as a term in a terms query on a numeric field?
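
For illustration, a minimal sketch of guarding the query construction on the client side so a numeric field like guid_2 never receives a null term (guid_candidates is a hypothetical application-side list, not something from the original post):

    # Minimal sketch: drop None entries before building the terms query that the
    # failing count request was sending ({"terms": {"guid_2": [null]}}, ...).
    import json

    guid_candidates = [123456789, None, 987654321]  # hypothetical input
    guids = [g for g in guid_candidates if g is not None]

    query = {
        "bool": {
            "must": [
                {"terms": {"guid_2": guids}},
                {"terms": {"relation": ["fan_of"]}},
            ]
        }
    }
    print(json.dumps(query))

If guids can end up empty, it is probably worth skipping the count request entirely rather than sending an empty terms list.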
