How safe is Java 7 update 80 with ES 1.4.x?


I'm planning on upgrading from Java 7u67 to u80, because of the fix for the upcoming leap second. Would you consider this a needed/safe move? Are there any good/bad reports about the latest update for Java 7?

Thanks in advance.
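As a sanity check before and after a JRE swap, the nodes info API reports the JVM version each node is actually running (the host and port below are just examples, adjust them to your cluster):

```shell
# Ask every node which JVM it is running; host/port are examples.
curl -s 'http://localhost:9200/_nodes/jvm?pretty'
# After the upgrade, each node's jvm.version field should report 1.7.0_80.
```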

Nothing bad that I'm aware of.

Hello, I have an update on this one, but it's not a happy one :slight_smile:

So we just tried shutting down all the nodes (3 in total: 2 with data, 1 just to route searches) and upgrading one of the data nodes to JRE7u80 (this was the only node brought back online, just to be safe). It came up with index corruption errors (everything was working flawlessly until the upgrade):

```
{"message":"[elastic2-west02] [index_2015_05_v1][3] sending failed shard for [index_2015_05_v1][3], node[Yx684lX3SC6uoD_yT0d5Tw], [P], s[INITIALIZING], indexUUID [UqUZW7iXTqez4M5nReZH1Q], reason [Failed to start shard, message [IndexShardGatewayRecoveryException[[index_2015_05_v1][3] failed to fetch index version after copying it over]; nested: CorruptIndexException[[index_2015_05_v1][3] Preexisting corrupted index [corrupted_UXeWPgF7Qn61i2_qi9gkJw] caused by: CorruptIndexException[Invalid fieldsStream maxPointer (file truncated?): maxPointer=102522365, length=40370176]
org.apache.lucene.index.CorruptIndexException: Invalid fieldsStream maxPointer (file truncated?): maxPointer=102522365, length=40370176
	at org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.(
	at org.apache.lucene.codecs.compressing.CompressingStoredFieldsFormat.fieldsReader(
	at org.apache.lucene.index.SegmentCoreReaders.(
	at org.apache.lucene.index.SegmentReader.(
	at org.apache.lucene.index.ReadersAndUpdates.getReader(
	at org.apache.lucene.index.ReadersAndUpdates.getReadOnlyClone(
	at
	at org.apache.lucene.index.IndexWriter.getReader(
	at
	at
	at org.elasticsearch.index.engine.internal.InternalEngine.buildSearchManager(
	at org.elasticsearch.index.engine.internal.InternalEngine.start(
	at org.elasticsearch.index.shard.service.InternalIndexShard.postRecovery(
	at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(
	at org.elasticsearch.index.gateway.IndexShardGatewayService$
	at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
	at java.util.concurrent.ThreadPoolExecutor$ Source)
	at Source)
]; ]]","@version":"1","@timestamp":"2015-05-31T11:57:31.887Z","type":"elasticsearch","host":"x.x.x.x:56107","path":"cluster.action.shard","priority":"WARN","logger_name":"cluster.action.shard","thread":"elasticsearch[elastic2-west02][generic][T#2]","class":"?","file":"?:?","method":"?"}
```

This is ES 1.4.4 (a couple of months ago we upgraded from 1.3.2 without issues) with JRE7u67. The node was upgraded to JRE7u80 and started up isolated from the cluster (all the nodes were down for this test), running on Amazon Linux, kernel 3.14.42-31.38.amzn1.x86_64.
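Since the error above is a Lucene CorruptIndexException, one way to see which segments are actually damaged is Lucene's CheckIndex tool. A rough sketch of the invocation; the jar version and shard path below are assumptions, adjust them to your install and data directory:

```shell
# Run Lucene's CheckIndex against the failing shard's index directory.
# The jar version (ES 1.4.4 ships Lucene 4.10.x) and the data path are
# illustrative. Do NOT pass -fix unless you accept losing the broken
# segments, and take a copy of the directory first.
java -cp /usr/share/elasticsearch/lib/lucene-core-4.10.3.jar \
  org.apache.lucene.index.CheckIndex \
  /var/lib/elasticsearch/mycluster/nodes/0/indices/index_2015_05_v1/3/index
```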

We then shut down the upgraded node, rolled back the JRE upgrade, renamed the data directory (just to keep the evidence), started the untouched nodes, and then the problematic one. Everything went back to JRE7u67, the shards were reallocated successfully, and the cluster is now all green and happy... but without the latest Java update.
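For what it's worth, the usual full-cluster-restart sequence disables shard allocation before taking nodes down, which avoids needless shard shuffling while the JRE is being swapped. A sketch, with example host/port:

```shell
# 1. Stop shard re-allocation before taking nodes down.
curl -s -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "transient": { "cluster.routing.allocation.enable": "none" }
}'

# 2. Shut the nodes down, upgrade the JRE, start them again.

# 3. Re-enable allocation and wait for the cluster to go green.
curl -s -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "transient": { "cluster.routing.allocation.enable": "all" }
}'
curl -s 'http://localhost:9200/_cluster/health?wait_for_status=green&timeout=60s&pretty'
```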

Anyone with any thoughts/comments?


EDIT: I've also opened an issue to get more feedback and any guidance that could help debug this.

Just wanted to report that we upgraded (not a rolling upgrade; we shut down the whole cluster) to ES 1.5.2 and Java 7u80, and everything went smoothly. Thanks again for your time and help.