ES upgrade 0.20.6 to 1.3.4 -> CorruptIndexException

Hi All,

After upgrading from ES 0.20.6 to 1.3.4, the following messages were logged:

[2014-12-19 10:02:06.714 GMT] WARN |||||| elasticsearch[es-node-name][generic][T#14] org.elasticsearch.cluster.action.shard [es-node-name] [index-name][3] sending failed shard for [index-name][3], node[qOTLmb3IQC2COXZh1n9O2w], [P], s[INITIALIZING], indexUUID [na], reason [Failed to start shard, message [IndexShardGatewayRecoveryException[[index-name][3] failed to fetch index version after copying it over]; nested: CorruptIndexException[[index-name][3] Corrupted index [corrupted_Ackui00SSBi8YXACZGNDkg] caused by: CorruptIndexException[did not read all bytes from file: read 112 vs size 113 (resource: BufferedChecksumIndexInput(NIOFSIndexInput(path="path/3/index/_uzm_2.del")))]]; ]]

[2014-12-19 10:02:08.390 GMT] WARN |||||| elasticsearch[es-node-name][generic][T#20] org.elasticsearch.indices.cluster [es-node-name] [index-name][3] failed to start shard
org.elasticsearch.index.gateway.IndexShardGatewayRecoveryException: [index-name][3] failed to fetch index version after copying it over
    at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:152)
    at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:132)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.lucene.index.CorruptIndexException: [index-name][3] Corrupted index [corrupted_Ackui00SSBi8YXACZGNDkg] caused by: CorruptIndexException[did not read all bytes from file: read 112 vs size 113 (resource: BufferedChecksumIndexInput(NIOFSIndexInput(path="path/3/index/_uzm_2.del")))]
    at org.elasticsearch.index.store.Store.failIfCorrupted(Store.java:353)
    at org.elasticsearch.index.store.Store.failIfCorrupted(Store.java:338)
    at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:119)
    ... 4 more

Shard [3] of the index remains unallocated and the cluster remains in a RED
state.

curl -XGET 'http://localhost:48012/_cluster/health?pretty=true'
{
  "cluster_name" : "cluster-name",
  "status" : "red",
  "timed_out" : false,
  "number_of_nodes" : 5,
  "number_of_data_nodes" : 5,
  "active_primary_shards" : 10,
  "active_shards" : 20,
  "relocating_shards" : 0,
  "initializing_shards" : 1,
  "unassigned_shards" : 1
}
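
For reference, the per-shard states can also be listed with the cat API, which should show shard [3] as INITIALIZING or UNASSIGNED (a sketch assuming the same host/port as the health call above; "index-name" is a placeholder):

curl -XGET 'http://localhost:48012/_cat/shards/index-name?v'

Each output line gives the shard number, whether it is a primary or a replica, its state (STARTED, INITIALIZING, UNASSIGNED) and the node it is allocated to.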

If I run an optimize (curl -XPOST
http://localhost:48012/index-name/_optimize?max_num_segments=1) on the
index before the upgrade, everything is fine. The optimize only helps when
it is done before the upgrade; if it is done after the upgrade, the problem
remains the same.

Any idea why this problem occurs?
Is there another way to avoid this problem? I would like to avoid running
optimize when there is a large volume of data.

Thank you,
Georgeta


Any ideas?


This bug occurs because you are upgrading to an old version of
Elasticsearch (1.3.4). Try the latest version, where the bug is fixed:
https://issues.apache.org/jira/browse/LUCENE-5975
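
To check whether an index still contains segments written by the old Lucene version, the segments API reports the Lucene version per segment (a sketch reusing the host/port and index name from the commands above):

curl -XGET 'http://localhost:48012/index-name/_segments?pretty=true'

Segments whose "version" field still shows a 3.x value were written by the pre-upgrade Lucene.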


The Lucene bug refers to versions 3.0-3.3, but Elasticsearch 0.20.6 uses
Lucene 3.6. Is it the same bug?

On Tuesday, December 30, 2014 2:08:48 PM UTC+1, Robert Muir wrote:

This bug occurs because you are upgrading to an old version of
elasticsearch (1.3.4). Try the latest version where the bug is fixed:
[LUCENE-5975] Lucene can't read 3.0-3.3 deleted documents - ASF JIRA


Yes. Again, use the latest version (1.4.x). It's very simple.
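
After moving to 1.4.x, the root endpoint is a quick way to confirm which Elasticsearch and Lucene versions a node is actually running (same host/port as the earlier commands):

curl -XGET 'http://localhost:48012/'

The "version" object in the response includes "number" (the Elasticsearch release) and "lucene_version".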

On Tue, Dec 30, 2014 at 8:53 AM, Georgeta Boanea gio682@gmail.com wrote:

The Lucene bug is referring to 3.0-3.3 versions, Elasticsearch 0.20.6 is
using Lucene 3.6, is it the same bug?


Thank you :)

On Tuesday, December 30, 2014 3:08:51 PM UTC+1, rcmuir wrote:

Yes. again, use the latest version (1.4.x). its very simple.
