ArrayIndexOutOfBoundsException while indexing


(Matt Weber) #1

Ran across this while indexing today. -65536 looks like it might be an overflow or something, but I'm not sure. This is our first attempt at indexing with large ngram settings (min=1, max=8), so that might be related. Any ideas as to where I should focus debugging? It happened 6-8 hours into our load, so it is not easily reproducible.

This is ES 6.1.0, index has 165 shards, and a complex mapping.

[2018-02-20T00:01:25,364][WARN ][o.e.i.c.IndicesClusterStateService] [node] [[index][97]] marking and sending shard failed due to [shard failure, reason [already closed by tragic event on the index writer]]
        java.lang.ArrayIndexOutOfBoundsException: -65536
                at org.apache.lucene.index.TermsHashPerField.writeByte(TermsHashPerField.java:198) ~[lucene-core-7.1.0.jar:7.1.0 84c90ad2c0218156c840e19a64d72b8a38550659 - ubuntu - 2017-10-13 16:12:42]
                at org.apache.lucene.index.TermsHashPerField.writeVInt(TermsHashPerField.java:224) ~[lucene-core-7.1.0.jar:7.1.0 84c90ad2c0218156c840e19a64d72b8a38550659 - ubuntu - 2017-10-13 16:12:42]
                at org.apache.lucene.index.FreqProxTermsWriterPerField.writeProx(FreqProxTermsWriterPerField.java:80) ~[lucene-core-7.1.0.jar:7.1.0 84c90ad2c0218156c840e19a64d72b8a38550659 - ubuntu - 2017-10-13 16:12:42]
                at org.apache.lucene.index.FreqProxTermsWriterPerField.addTerm(FreqProxTermsWriterPerField.java:184) ~[lucene-core-7.1.0.jar:7.1.0 84c90ad2c0218156c840e19a64d72b8a38550659 - ubuntu - 2017-10-13 16:12:42]
                at org.apache.lucene.index.TermsHashPerField.add(TermsHashPerField.java:185) ~[lucene-core-7.1.0.jar:7.1.0 84c90ad2c0218156c840e19a64d72b8a38550659 - ubuntu - 2017-10-13 16:12:42]
                at org.apache.lucene.index.DefaultIndexingChain$PerField.invert(DefaultIndexingChain.java:786) ~[lucene-core-7.1.0.jar:7.1.0 84c90ad2c0218156c840e19a64d72b8a38550659 - ubuntu - 2017-10-13 16:12:42]
                at org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:430) ~[lucene-core-7.1.0.jar:7.1.0 84c90ad2c0218156c840e19a64d72b8a38550659 - ubuntu - 2017-10-13 16:12:42]
                at org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:392) ~[lucene-core-7.1.0.jar:7.1.0 84c90ad2c0218156c840e19a64d72b8a38550659 - ubuntu - 2017-10-13 16:12:42]
                at org.apache.lucene.index.DocumentsWriterPerThread.updateDocuments(DocumentsWriterPerThread.java:280) ~[lucene-core-7.1.0.jar:7.1.0 84c90ad2c0218156c840e19a64d72b8a38550659 - ubuntu - 2017-10-13 16:12:42]
                at org.apache.lucene.index.DocumentsWriter.updateDocuments(DocumentsWriter.java:436) ~[lucene-core-7.1.0.jar:7.1.0 84c90ad2c0218156c840e19a64d72b8a38550659 - ubuntu - 2017-10-13 16:12:42]
                at org.apache.lucene.index.IndexWriter.updateDocuments(IndexWriter.java:1530) ~[lucene-core-7.1.0.jar:7.1.0 84c90ad2c0218156c840e19a64d72b8a38550659 - ubuntu - 2017-10-13 16:12:42]
                at org.apache.lucene.index.IndexWriter.addDocuments(IndexWriter.java:1506) ~[lucene-core-7.1.0.jar:7.1.0 84c90ad2c0218156c840e19a64d72b8a38550659 - ubuntu - 2017-10-13 16:12:42]
                at org.elasticsearch.index.engine.InternalEngine.index(InternalEngine.java:1002) ~[elasticsearch-6.1.0.jar:6.1.0]
                at org.elasticsearch.index.engine.InternalEngine.indexIntoLucene(InternalEngine.java:946) ~[elasticsearch-6.1.0.jar:6.1.0]
                at org.elasticsearch.index.engine.InternalEngine.index(InternalEngine.java:815) ~[elasticsearch-6.1.0.jar:6.1.0]
                at org.elasticsearch.index.shard.IndexShard.index(IndexShard.java:732) ~[elasticsearch-6.1.0.jar:6.1.0]
                at org.elasticsearch.index.shard.IndexShard.applyIndexOperation(IndexShard.java:701) ~[elasticsearch-6.1.0.jar:6.1.0]
                at org.elasticsearch.index.shard.IndexShard.applyIndexOperationOnPrimary(IndexShard.java:667) ~[elasticsearch-6.1.0.jar:6.1.0]
                at org.elasticsearch.action.bulk.TransportShardBulkAction.executeIndexRequestOnPrimary(TransportShardBulkAction.java:548) ~[elasticsearch-6.1.0.jar:6.1.0]
                at org.elasticsearch.action.bulk.TransportShardBulkAction.executeIndexRequest(TransportShardBulkAction.java:140) ~[elasticsearch-6.1.0.jar:6.1.0]
                at org.elasticsearch.action.bulk.TransportShardBulkAction.executeBulkItemRequest(TransportShardBulkAction.java:236) ~[elasticsearch-6.1.0.jar:6.1.0]
                at org.elasticsearch.action.bulk.TransportShardBulkAction.performOnPrimary(TransportShardBulkAction.java:123) ~[elasticsearch-6.1.0.jar:6.1.0]
                at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:110) ~[elasticsearch-6.1.0.jar:6.1.0]
                at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:72) ~[elasticsearch-6.1.0.jar:6.1.0]
                at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryShardReference.perform(TransportReplicationAction.java:1033) ~[elasticsearch-6.1.0.jar:6.1.0]
                at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryShardReference.perform(TransportReplicationAction.java:1011) ~[elasticsearch-6.1.0.jar:6.1.0]
                at org.elasticsearch.action.support.replication.ReplicationOperation.execute(ReplicationOperation.java:104) ~[elasticsearch-6.1.0.jar:6.1.0]
                at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.onResponse(TransportReplicationAction.java:358) ~[elasticsearch-6.1.0.jar:6.1.0]
                at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.onResponse(TransportReplicationAction.java:298) ~[elasticsearch-6.1.0.jar:6.1.0]
                at org.elasticsearch.action.support.replication.TransportReplicationAction$1.onResponse(TransportReplicationAction.java:974) ~[elasticsearch-6.1.0.jar:6.1.0]
                at org.elasticsearch.action.support.replication.TransportReplicationAction$1.onResponse(TransportReplicationAction.java:971) ~[elasticsearch-6.1.0.jar:6.1.0]
                at org.elasticsearch.action.support.ContextPreservingActionListener.onResponse(ContextPreservingActionListener.java:43) ~[elasticsearch-6.1.0.jar:6.1.0]
                at org.elasticsearch.index.shard.IndexShardOperationPermits$PermitAwareThreadedActionListener$1.doRun(IndexShardOperationPermits.java:305) ~[elasticsearch-6.1.0.jar:6.1.0]
                at 

(Val Crettaz) #2

With a minimum ngram of 1 and a maximum of 8, I think you ran into this situation. As a result, ES introduced a new index-level setting called index.max_ngram_diff to prevent the min and max ngram settings from diverging too much.
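To see why that pair of settings blows up term counts, here is a rough sketch (illustrative Python, not tied to Lucene's internals) of how many character ngrams a single token generates:

```python
def ngram_count(token_len, min_gram, max_gram):
    """Number of character ngrams emitted for one token of the given length."""
    return sum(max(0, token_len - k + 1) for k in range(min_gram, max_gram + 1))

# A 10-character token with min=1, max=8 emits 52 terms;
# with the default max_ngram_diff of 1 (e.g. min=1, max=2) it emits only 19.
print(ngram_count(10, 1, 8))   # 52
print(ngram_count(10, 1, 2))   # 19
```

Multiplied across every token of every document, a min/max spread of 7 inflates the in-memory postings data dramatically compared to the default spread of 1.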


(Matt Weber) #3

Thanks Val, I knew of this and suspected it. I don't think a max ngram of 8 is that crazy a number, considering the issue that prompted these soft limits involved a value of 7k. Ultimately I would like to know what the actual issue is; "too many terms" is not a sufficient answer. Are we overflowing a position or offset counter? What ultimately leads to this exception being thrown?
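For what it's worth, the arithmetic is at least consistent with a signed 32-bit wraparound: -65536 is exactly what the unsigned bit pattern 0xFFFF0000 looks like when reinterpreted as a Java int. This is only a sketch of that hypothesis, not a claim about which Lucene counter actually wraps:

```python
import ctypes

def as_java_int(value):
    """Reinterpret an arbitrary integer as a signed 32-bit (Java int) value."""
    return ctypes.c_int32(value & 0xFFFFFFFF).value

# 0xFFFF0000 (4,294,901,760 unsigned) wraps to -65536 as a signed 32-bit int.
print(as_java_int(0xFFFF0000))  # -65536
```

If an internal byte offset grew past the int range somewhere in TermsHashPerField's buffers, a negative index of exactly this shape is what you'd expect to see.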


(system) #4

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.