Elasticsearch with Informatica MDM failing with Java out of memory error

Can you please help me with this issue?

[algsascs3655008] [4d5354312e375053-organization][0] already closed by tragic event on the index writer
java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:3744) ~[?:?]
at org.apache.lucene.util.ArrayUtil.grow(ArrayUtil.java:285) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]
at org.apache.lucene.store.GrowableByteArrayDataOutput.writeBytes(GrowableByteArrayDataOutput.java:63) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]
at org.apache.lucene.store.DataOutput.writeBytes(DataOutput.java:52) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]
at org.apache.lucene.store.GrowableByteArrayDataOutput.writeString(GrowableByteArrayDataOutput.java:82) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]
at org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.writeField(CompressingStoredFieldsWriter.java:298) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]
at org.apache.lucene.index.StoredFieldsConsumer.writeField(StoredFieldsConsumer.java:55) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]
at org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:451) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]
at org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:392) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]
at org.apache.lucene.index.DocumentsWriterPerThread.updateDocuments(DocumentsWriterPerThread.java:281) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]

This is almost impossible to answer without more information. The only thing that can be extracted from this snippet is that the heap was too small. However, without knowing how big the heap is, what kind of workload the cluster handles, and what indices it holds, it's next to impossible to get any help. The Elasticsearch version in use would also help a lot. Adding some more context will help people take a closer look.
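
The cluster itself can report most of that context. The sketch below assumes a node reachable at `localhost:9200` without authentication (adjust host, port, and credentials for your environment); the endpoints shown are standard Elasticsearch APIs, and the Lucene 7.2.1 in the trace suggests an Elasticsearch 6.2.x node.

```
# Report the running Elasticsearch version (version.number in the response)
curl -s 'http://localhost:9200'

# Report the configured JVM heap per node (mem.heap_init / mem.heap_max)
curl -s 'http://localhost:9200/_nodes/jvm?pretty'

# List indices with document counts and on-disk sizes
curl -s 'http://localhost:9200/_cat/indices?v'
```

If `heap_max` turns out to be the default (often 1 GB on older installs), that alone can explain an OutOfMemoryError under indexing load.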

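If the heap does turn out to be undersized, it is raised in `config/jvm.options` (or via the `ES_JAVA_OPTS` environment variable). The values below are illustrative only, not a recommendation for any particular workload; the usual guidance is to set `-Xms` and `-Xmx` to the same value, keep the heap at or below roughly 50% of the machine's RAM, and stay under about 31 GB so compressed object pointers remain enabled.

```
# config/jvm.options -- example sizing only; tune to your machine and workload
-Xms8g
-Xmx8g
```

After changing `jvm.options`, restart the node and re-check `_nodes/jvm` to confirm the new heap took effect.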