I use BulkProcessor for indexing, with a bulk size of 5M. It worked very well with ES 1.7.3, but after upgrading to ES 2.1.0 it throws exceptions like the one below:
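For context, the processor is set up roughly like this (a minimal sketch of what I described, assuming "5M" means a 5 MB bulk size; the client construction and listener bodies are simplified):

```java
import org.elasticsearch.action.bulk.BulkProcessor;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.common.unit.ByteSizeUnit;
import org.elasticsearch.common.unit.ByteSizeValue;

public class BulkIndexer {

    // Build a BulkProcessor that flushes whenever ~5 MB of requests accumulate.
    static BulkProcessor buildProcessor(Client client) {
        return BulkProcessor.builder(client, new BulkProcessor.Listener() {
            @Override
            public void beforeBulk(long executionId, BulkRequest request) {
                // invoked just before each bulk request executes
            }

            @Override
            public void afterBulk(long executionId, BulkRequest request, BulkResponse response) {
                // invoked after each bulk request; per-item failures show up here
                if (response.hasFailures()) {
                    System.err.println(response.buildFailureMessage());
                }
            }

            @Override
            public void afterBulk(long executionId, BulkRequest request, Throwable failure) {
                // invoked when the whole bulk request fails
                failure.printStackTrace();
            }
        })
        .setBulkActions(-1)                                  // disable the action-count trigger
        .setBulkSize(new ByteSizeValue(5, ByteSizeUnit.MB))  // flush at 5 MB
        .setConcurrentRequests(1)
        .build();
    }
}
```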
[2016-01-08 17:45:19,733][DEBUG][action.bulk ] [mybox] [mgindex0][3] failed to execute bulk item (index) index {[mgindex0][95090eb1-9948-4b21-868a-fc3389b34b6a][95090eb1-1f78-467e-a600-1ce540f27ec0], source[{"ea312e4e-48b8-4c5a-87c6-59d7fe0d9970":"买家","808ff715-f215-4300-bc47-6d29c53a1945|":70589.0,"e8238304-703f-4d89-b543-af994886393f|":"时尚雪地靴","381cbca0-71ba-434e-b624-93d2a732f427":25062702,"ea312e4e-48b8-4c5a-87c6-59d7fe0d9970|":"买家","4c2bc039-0ace-4713-a988-cafd05eff5b9|":"女鞋","label":"paipai","createdon":1452243612892,"4c2bc039-0ace-4713-a988-cafd05eff5b9":"女鞋","createdby":1,"datasource":"bf0bef91-88ea-4c26-acbf-ca1f38976aef","808ff715-f215-4300-bc47-6d29c53a1945":70589.0,"381cbca0-71ba-434e-b624-93d2a732f427|":25062702,"e8238304-703f-4d89-b543-af994886393f":"时尚雪地靴"}]}
java.lang.ArrayIndexOutOfBoundsException: -2097153
at org.apache.lucene.util.BytesRefHash.rehash(BytesRefHash.java:419)
at org.apache.lucene.util.BytesRefHash.add(BytesRefHash.java:323)
at org.apache.lucene.index.TermsHashPerField.add(TermsHashPerField.java:150)
at org.apache.lucene.index.DefaultIndexingChain$PerField.invert(DefaultIndexingChain.java:661)
at org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:344)
at org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:300)
at org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:234)
at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:450)
at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1475)
at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1254)
at org.elasticsearch.index.engine.InternalEngine.innerIndex(InternalEngine.java:539)
at org.elasticsearch.index.engine.InternalEngine.index(InternalEngine.java:468)
at org.elasticsearch.index.shard.IndexShard.index(IndexShard.java:571)
at org.elasticsearch.index.engine.Engine$Index.execute(Engine.java:836)
at org.elasticsearch.action.support.replication.TransportReplicationAction.executeIndexRequestOnPrimary(TransportReplicationAction.java:1073)
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardIndexOperation(TransportShardBulkAction.java:338)
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:131)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase.performOnPrimary(TransportReplicationAction.java:579)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase$1.doRun(TransportReplicationAction.java:452)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[2016-01-08 17:45:19,789][DEBUG][index.translog ] [mybox] [mgindex0][3] translog closed
[2016-01-08 17:45:19,789][DEBUG][index.engine ] [mybox] [mgindex0][3] engine closed [engine failed on: [already closed by tragic event]]
Is there something wrong with my setup, or is this a bug? Thanks for any advice.