Hi,
I am having an issue with one of my shards crashing, and restarting the node
does not fix it. I have a machine dedicated to Elasticsearch that has 16GB
of RAM with an 8GB heap. We are running a single node with 2 shards and no
replicas. Cluster stats show 741,189 documents and 1.2GB of data; we had
2.5GB before the shard crashed.
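For reference, our setup is roughly the following (a sketch from memory, so
file locations and exact lines may differ slightly on our box):

  # heap size, set in bin/elasticsearch.in.sh (or the service environment)
  ES_HEAP_SIZE=8g

  # config/elasticsearch.yml
  index.number_of_shards: 2
  index.number_of_replicas: 0
  bootstrap.mlockall: true

I assume the bootstrap.mlockall setting is what produces the mlockall
warning at the top of the log below.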
When I restart the server, I get the following error:
[2013-10-11 16:16:18,729][WARN ][common.jna ] Unknown mlockall error 0
[2013-10-11 16:16:18,855][INFO ][node ] [es01] version[0.90.3], pid[1369], build[5c38d60/2013-08-06T13:18:31Z]
[2013-10-11 16:16:18,855][INFO ][node ] [es01] initializing ...
[2013-10-11 16:16:18,921][INFO ][plugins ] [es01] loaded [jetty], sites [inquisitor, HQ]
[2013-10-11 16:16:21,556][INFO ][node ] [es01] initialized
[2013-10-11 16:16:21,556][INFO ][node ] [es01] starting ...
[2013-10-11 16:16:21,648][INFO ][transport ] [es01] bound_address {inet[/67.225.227.57:9300]}, publish_address {inet[/67.225.227.57:9300]}
[2013-10-11 16:16:24,681][INFO ][cluster.service ] [es01] new_master [es01][Cbmd-BPUSzyYufGbQgWJvw][inet[/67.225.227.57:9300]], reason: zen-disco-join (elected_as_master)
[2013-10-11 16:16:24,708][INFO ][discovery ] [es01] backstitch/Cbmd-BPUSzyYufGbQgWJvw
[2013-10-11 16:16:25,404][INFO ][org.eclipse.jetty.server.Server] [es01] jetty-8.1.4.v20120524
[2013-10-11 16:16:25,510][INFO ][org.eclipse.jetty.server.AbstractConnector] [es01] Started SelectChannelConnector@0.0.0.0:9200
[2013-10-11 16:16:25,510][INFO ][http ] [es01] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/67.225.227.57:9200]}
[2013-10-11 16:16:25,511][INFO ][node ] [es01] started
[2013-10-11 16:16:26,126][INFO ][gateway ] [es01] recovered [1] indices into cluster_state
[2013-10-11 16:16:45,392][WARN ][index.merge.scheduler ] [es01] [results][0] failed to merge
java.lang.ArrayIndexOutOfBoundsException: -127
    at org.apache.lucene.codecs.lucene41.ForUtil.skipBlock(ForUtil.java:219)
    at org.apache.lucene.codecs.lucene41.Lucene41PostingsReader$BlockDocsAndPositionsEnum.skipPositions(Lucene41PostingsReader.java:958)
    at org.apache.lucene.codecs.lucene41.Lucene41PostingsReader$BlockDocsAndPositionsEnum.nextPosition(Lucene41PostingsReader.java:988)
    at org.apache.lucene.codecs.MappingMultiDocsAndPositionsEnum.nextPosition(MappingMultiDocsAndPositionsEnum.java:120)
    at org.apache.lucene.codecs.PostingsConsumer.merge(PostingsConsumer.java:118)
    at org.apache.lucene.codecs.TermsConsumer.merge(TermsConsumer.java:164)
    at org.apache.lucene.codecs.FieldsConsumer.merge(FieldsConsumer.java:72)
    at org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:365)
    at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:98)
    at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3772)
    at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3376)
    at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
    at org.apache.lucene.index.TrackingConcurrentMergeScheduler.doMerge(TrackingConcurrentMergeScheduler.java:91)
    at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
[2013-10-11 16:16:45,393][WARN ][index.engine.robin ] [es01] [results][0] failed engine
org.apache.lucene.index.MergePolicy$MergeException: java.lang.ArrayIndexOutOfBoundsException: -127
    at org.elasticsearch.index.merge.scheduler.ConcurrentMergeSchedulerProvider$CustomConcurrentMergeScheduler.handleMergeException(ConcurrentMergeSchedulerProvider.java:99)
    at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:518)
Caused by: java.lang.ArrayIndexOutOfBoundsException: -127
    at org.apache.lucene.codecs.lucene41.ForUtil.skipBlock(ForUtil.java:219)
    at org.apache.lucene.codecs.lucene41.Lucene41PostingsReader$BlockDocsAndPositionsEnum.skipPositions(Lucene41PostingsReader.java:958)
    at org.apache.lucene.codecs.lucene41.Lucene41PostingsReader$BlockDocsAndPositionsEnum.nextPosition(Lucene41PostingsReader.java:988)
    at org.apache.lucene.codecs.MappingMultiDocsAndPositionsEnum.nextPosition(MappingMultiDocsAndPositionsEnum.java:120)
    at org.apache.lucene.codecs.PostingsConsumer.merge(PostingsConsumer.java:118)
    at org.apache.lucene.codecs.TermsConsumer.merge(TermsConsumer.java:164)
    at org.apache.lucene.codecs.FieldsConsumer.merge(FieldsConsumer.java:72)
    at org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:365)
    at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:98)
    at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3772)
    at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3376)
    at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
    at org.apache.lucene.index.TrackingConcurrentMergeScheduler.doMerge(TrackingConcurrentMergeScheduler.java:91)
    at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
[2013-10-11 16:16:45,397][WARN ][cluster.action.shard ] [es01] sending failed shard for [results][0], node[Cbmd-BPUSzyYufGbQgWJvw], [P], s[STARTED], reason [engine failure, message [MergeException[java.lang.ArrayIndexOutOfBoundsException: -127]; nested: ArrayIndexOutOfBoundsException[-127]; ]]
[2013-10-11 16:16:45,397][WARN ][cluster.action.shard ] [es01] received shard failed for [results][0], node[Cbmd-BPUSzyYufGbQgWJvw], [P], s[STARTED], reason [engine failure, message [MergeException[java.lang.ArrayIndexOutOfBoundsException: -127]; nested: ArrayIndexOutOfBoundsException[-127]; ]]
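Since we have no replicas, the failed primary leaves the cluster red after
this. I have been checking it with the cluster health API, along the lines
of:

  curl -XGET 'http://localhost:9200/_cluster/health?pretty'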
I am hitting the server pretty hard, inserting new documents and updating
older ones. All of our mappings have a TTL of no more than 1 week, and if a
document is inserted with an id that is already in use, we ignore it (see
the sketch below).
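Roughly, the mapping and the inserts look like this. This is a sketch, not
our exact code: the type name "result", the document id, and the body are
placeholders, and the TTL default varies per mapping:

  # enable TTL on the type, defaulting to one week
  curl -XPUT 'http://localhost:9200/results/result/_mapping' -d '{
    "result": {
      "_ttl": { "enabled": true, "default": "7d" }
    }
  }'

  # index with op_type=create so an existing id is rejected instead of
  # updated; Elasticsearch answers with a 409 (DocumentAlreadyExistsException),
  # which is the error we ignore
  curl -XPUT 'http://localhost:9200/results/result/12345?op_type=create' -d '{
    "field": "value"
  }'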
If there is any other information I can provide, I will be more than willing to share it.
Thank you,
Stefanie