Hi All,
After upgrading from ES 0.20.6 to 1.3.4, the following messages appeared in the logs:
[2014-12-19 10:02:06.714 GMT] WARN |||||| elasticsearch[es-node-name][generic][T#14]
org.elasticsearch.cluster.action.shard [es-node-name] [index-name][3] sending failed shard for [index-name][3],
node[qOTLmb3IQC2COXZh1n9O2w], [P], s[INITIALIZING], indexUUID [na], reason [Failed to start shard, message
[IndexShardGatewayRecoveryException[[index-name][3] failed to fetch index version after copying it over]; nested:
CorruptIndexException[[index-name][3] Corrupted index [corrupted_Ackui00SSBi8YXACZGNDkg] caused by:
CorruptIndexException[did not read all bytes from file: read 112 vs size 113 (resource:
BufferedChecksumIndexInput(NIOFSIndexInput(path="path/3/index/_uzm_2.del")))]]; ]]
[2014-12-19 10:02:08.390 GMT] WARN |||||| elasticsearch[es-node-name][generic][T#20]
org.elasticsearch.indices.cluster [es-node-name] [index-name][3] failed to start shard
org.elasticsearch.index.gateway.IndexShardGatewayRecoveryException: [index-name][3] failed to fetch index version after copying it over
    at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:152)
    at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:132)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.lucene.index.CorruptIndexException: [index-name][3] Corrupted index [corrupted_Ackui00SSBi8YXACZGNDkg] caused by:
CorruptIndexException[did not read all bytes from file: read 112 vs size 113 (resource:
BufferedChecksumIndexInput(NIOFSIndexInput(path="path/3/index/_uzm_2.del")))]
    at org.elasticsearch.index.store.Store.failIfCorrupted(Store.java:353)
    at org.elasticsearch.index.store.Store.failIfCorrupted(Store.java:338)
    at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:119)
    ... 4 more
Shard [3] of the index stays unallocated and the cluster remains in a RED state.
curl -XGET 'http://localhost:48012/_cluster/health?pretty=true'
{
  "cluster_name" : "cluster-name",
  "status" : "red",
  "timed_out" : false,
  "number_of_nodes" : 5,
  "number_of_data_nodes" : 5,
  "active_primary_shards" : 10,
  "active_shards" : 20,
  "relocating_shards" : 0,
  "initializing_shards" : 1,
  "unassigned_shards" : 1
}
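
To see exactly which shard is affected, the _cat shards API gives a per-shard view (same host and port as above; it lists the state of every shard, e.g. STARTED, INITIALIZING or UNASSIGNED):

curl -XGET 'http://localhost:48012/_cat/shards?v'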
If I run an optimize on the index before the upgrade (curl -XPOST
'http://localhost:48012/index-name/_optimize?max_num_segments=1'), everything is fine.
The optimize only helps when it is done before the upgrade; running it after the
upgrade leaves the problem unchanged.
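
For reference, a quick way to check that the optimize really merged everything down is the indices _segments API (same host, port and index name as above); each shard should then report a single segment:

curl -XGET 'http://localhost:48012/index-name/_segments?pretty=true'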
Any idea why this problem occurs?
Is there another way to avoid it? I would like to avoid running optimize when the
data volume is large.
Thank you,
Georgeta