Exception during bulk indexing in Elasticsearch 2.1.0

Version info:
elasticsearch 2.1.0
heap size: 20g
JDK: 1.8.0_66
Scenario:
Inserting data with BulkProcessor, bulkSize = 5M.
(The previous version, 1.7.x, had been running fine.)
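For context, the BulkProcessor setup described above would look roughly like the sketch below, using the Elasticsearch 2.x Java API; the `client` argument and the no-op listener are illustrative placeholders, not the poster's actual code:

```java
import org.elasticsearch.action.bulk.BulkProcessor;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.common.unit.ByteSizeUnit;
import org.elasticsearch.common.unit.ByteSizeValue;

public class BulkSetup {
    // Builds a BulkProcessor that flushes once the buffered requests
    // reach 5 MB, matching the bulkSize = 5M setting mentioned above.
    public static BulkProcessor build(Client client) {
        return BulkProcessor.builder(client, new BulkProcessor.Listener() {
            @Override public void beforeBulk(long id, BulkRequest request) {}
            @Override public void afterBulk(long id, BulkRequest request, BulkResponse response) {}
            @Override public void afterBulk(long id, BulkRequest request, Throwable failure) {
                // Bulk-level failures surface here; per-item failures like the
                // one logged below appear in the BulkResponse items instead.
            }
        })
        .setBulkSize(new ByteSizeValue(5, ByteSizeUnit.MB))
        .build();
    }
}
```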
The exception is as follows:

[2016-01-08 17:45:19,733][DEBUG][action.bulk ] [mybox] [mgindex0][3] failed to execute bulk item (index) index {[mgindex0][95090eb1-9948-4b21-868a-fc3389b34b6a][95090eb1-1f78-467e-a600-1ce540f27ec0], source[{"ea312e4e-48b8-4c5a-87c6-59d7fe0d9970":"买家","808ff715-f215-4300-bc47-6d29c53a1945|":70589.0,"e8238304-703f-4d89-b543-af994886393f|":"时尚雪地靴","381cbca0-71ba-434e-b624-93d2a732f427":25062702,"ea312e4e-48b8-4c5a-87c6-59d7fe0d9970|":"买家","4c2bc039-0ace-4713-a988-cafd05eff5b9|":"女鞋","label":"paipai","createdon":1452243612892,"4c2bc039-0ace-4713-a988-cafd05eff5b9":"女鞋","createdby":1,"datasource":"bf0bef91-88ea-4c26-acbf-ca1f38976aef","808ff715-f215-4300-bc47-6d29c53a1945":70589.0,"381cbca0-71ba-434e-b624-93d2a732f427|":25062702,"e8238304-703f-4d89-b543-af994886393f":"时尚雪地靴"}]}
java.lang.ArrayIndexOutOfBoundsException: -2097153
at org.apache.lucene.util.BytesRefHash.rehash(BytesRefHash.java:419)
at org.apache.lucene.util.BytesRefHash.add(BytesRefHash.java:323)
at org.apache.lucene.index.TermsHashPerField.add(TermsHashPerField.java:150)
at org.apache.lucene.index.DefaultIndexingChain$PerField.invert(DefaultIndexingChain.java:661)
at org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:344)
at org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:300)
at org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:234)
at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:450)
at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1475)
at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1254)
at org.elasticsearch.index.engine.InternalEngine.innerIndex(InternalEngine.java:539)
at org.elasticsearch.index.engine.InternalEngine.index(InternalEngine.java:468)
at org.elasticsearch.index.shard.IndexShard.index(IndexShard.java:571)
at org.elasticsearch.index.engine.Engine$Index.execute(Engine.java:836)
at org.elasticsearch.action.support.replication.TransportReplicationAction.executeIndexRequestOnPrimary(TransportReplicationAction.java:1073)
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardIndexOperation(TransportShardBulkAction.java:338)
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:131)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase.performOnPrimary(TransportReplicationAction.java:579)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase$1.doRun(TransportReplicationAction.java:452)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[2016-01-08 17:45:19,789][DEBUG][index.translog ] [mybox] [mgindex0][3] translog closed
[2016-01-08 17:45:19,789][DEBUG][index.engine ] [mybox] [mgindex0][3] engine closed [engine failed on: [already closed by tragic event]]

From the stack trace, the error occurs inside Lucene. Is there any known fix or workaround?

Was this index created as a blank index on 2.1.0, or was it upgraded from 1.7.0? I'd like to know which Lucene version you are on.
Also, elasticsearch 2.1.1 has already fixed a number of bugs; could you try upgrading to the latest version?

It is a newly created blank index.
I have opened a topic on the official Elasticsearch discussion forum and am following up there.
It may be a Lucene bug that shows up under heavy indexing pressure.