Rolling upgrade from 1.2.1 to 1.3.0 – java.lang.IllegalArgumentException: No enum constant org.apache.lucene.util.Version.4.3.1

Hi,

We're upgrading our staging cluster from 1.2.1 to 1.3.0 one box at a time.
We have stopped Elasticsearch on the first box, removed the Groovy plugin
we were using with 1.2.1, and deployed the new version with Chef. The new
box reports as 1.3.0, but when it rejoins the cluster (of two boxes; the
other box is still running 1.2.1) the logs fill with:

[2014-07-28 09:40:57,121][WARN ][cluster.action.shard ] [stg-elastic-1] [development][3] sending failed shard for [development][3], node[ob-VHwcSR3KWEJcaS8GyVA], [R], s[INITIALIZING], indexUUID [na], reason [Failed to start shard, message [IllegalArgumentException[No enum constant org.apache.lucene.util.Version.4.3.1]]]
[2014-07-28 09:40:57,133][WARN ][index.engine.internal ] [stg-elastic-1] [development][4] failed engine [corrupted preexisting index]
[2014-07-28 09:40:57,134][WARN ][indices.cluster ] [stg-elastic-1] [development][4] failed to start shard
java.lang.IllegalArgumentException: No enum constant org.apache.lucene.util.Version.4.3.1
    at java.lang.Enum.valueOf(Enum.java:236)
    at org.apache.lucene.util.Version.valueOf(Version.java:32)
    at org.apache.lucene.util.Version.parseLeniently(Version.java:250)
    at org.elasticsearch.index.store.Store$MetadataSnapshot.buildMetadata(Store.java:451)
    at org.elasticsearch.index.store.Store$MetadataSnapshot.<init>(Store.java:433)
    at org.elasticsearch.index.store.Store.getMetadata(Store.java:144)
    at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyInitializingShard(IndicesClusterStateService.java:724)
    at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyNewOrUpdatedShards(IndicesClusterStateService.java:576)
    at org.elasticsearch.indices.cluster.IndicesClusterStateService.clusterChanged(IndicesClusterStateService.java:183)
    at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:444)
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:153)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)

Most of the indexes have both primary and replica shards allocated, but the
index in question (development) has replicas repeatedly initializing and
failing on the newly upgraded node.

The documentation suggests that a rolling upgrade from 1.2.1 should be
possible and that the two versions should coexist reasonably happily, but
that does not seem to be the case here: the cluster isn't going green. The
words 'corrupted preexisting index' make us slightly concerned that some
damage might have been done.
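
For reference, this is roughly how we're watching for the cluster to go
green (a minimal sketch; it assumes a node answers on localhost:9200 –
adjust for your setup):

# Poll the cluster health API until the cluster reports green.
import json
import time
import urllib.request

ES_HOST = "http://localhost:9200"  # assumption: a locally reachable node

while True:
    with urllib.request.urlopen(ES_HOST + "/_cluster/health") as resp:
        health = json.loads(resp.read().decode("utf-8"))
    # status is "green", "yellow" or "red"; the initializing/unassigned
    # shard counts show whether recovery is still making progress.
    print(health["status"],
          health["initializing_shards"],
          health["unassigned_shards"])
    if health["status"] == "green":
        break
    time.sleep(5)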

We've stopped Elasticsearch on the 1.3.0 box whilst we seek your advice.

Thanks,

Ollie

There was a bug in Lucene which caused problems with Elasticsearch 1.3.0.
You might already know this, but 1.3.1 was released today to fix that issue.

That bug should only affect older versions, and yours is newer, but the
error mentions "Version.4.3.1", which is the Lucene version that shipped
with Elasticsearch 0.90.2. Did you upgrade from 0.90.2 at some point, and
if so, do you ever reindex your data? You may still have very old segments
on disk, in which case you should be able to solve the problem by running
an optimize down to a single segment (max_num_segments=1) so those old
segments are rewritten.
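
If it helps, here's a minimal sketch of that optimize call (it assumes the
affected index is called "development" and that a node answers on
localhost:9200 – adjust both for your cluster):

# Force-merge the index down to a single segment per shard so that any very
# old Lucene segments are rewritten in the current format, using the 1.x
# optimize API with max_num_segments=1.
import json
import urllib.request

ES_HOST = "http://localhost:9200"  # assumption: a locally reachable node
INDEX = "development"              # assumption: the affected index

url = "{}/{}/_optimize?max_num_segments=1".format(ES_HOST, INDEX)
req = urllib.request.Request(url, method="POST")
with urllib.request.urlopen(req) as resp:
    print(json.dumps(json.loads(resp.read().decode("utf-8")), indent=2))

Bear in mind that an optimize on a large index is I/O-heavy, so it's best
run during a quiet period.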

Cheers,

Ivan



Do you happen to know if optimize will create a segment larger than 5 gigs?


It will depend on your merge settings and your shard size. I don't see why
it couldn't, but I don't recall what the default settings are.
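
If you want to check what the optimize actually produced, the segments API
lists the size of every segment. A minimal sketch, again assuming the index
is called "development" and a node answers on localhost:9200:

# List each segment's size so you can see whether any single segment has
# grown past ~5 GB after the optimize.
import json
import urllib.request

ES_HOST = "http://localhost:9200"  # assumption: a locally reachable node
INDEX = "development"              # assumption: the index to inspect

with urllib.request.urlopen("{}/{}/_segments".format(ES_HOST, INDEX)) as resp:
    data = json.loads(resp.read().decode("utf-8"))

# The response is indices -> index -> shards -> shard copies -> segments.
for shard_copies in data["indices"][INDEX]["shards"].values():
    for copy in shard_copies:
        for name, seg in copy["segments"].items():
            print(copy["routing"]["node"], name, seg["size_in_bytes"])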

--
Ivan

On Mon, Jul 28, 2014 at 8:52 PM, smonasco smonasco@gmail.com wrote:

Do you happen to know if optimize will create a segment larger than 5 gigs?
