I have a Grails web application with an embedded Elasticsearch 1.3.5 instance. It is configured like this:
Settings settings = ImmutableSettings.settingsBuilder()
        .put("node.http.enabled", false)
        .put("index.gateway.type", "none")
        .put("index.number_of_shards", 1)
        .put("path.data", siIndexLoc)
        .put("index.number_of_replicas", 0)
        .build();
node = NodeBuilder.nodeBuilder().local(true).settings(settings).node();
client = node.client();
It runs embedded in a Grails app that has 1 GB of heap.
The use case is indexing a very small number of documents with very frequent updates.
The issue: during indexing, I apparently run out of heap when doing lots of concurrent indexing via the bulk API, with this exception:
Exception in thread "elasticsearch[Master Menace][[myindex]: Lucene Merge Thread #1]" org.apache.lucene.index.MergePolicy$MergeException: java.lang.OutOfMemoryError: Java heap space
    at org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:545)
    at org.elasticsearch.index.merge.scheduler.ConcurrentMergeSchedulerProvider$CustomConcurrentMergeScheduler.handleMergeException(ConcurrentMergeSchedulerProvider.java:134)
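My indexing path is essentially the stock bulk API. A stripped-down sketch of what the concurrent writers do (index/type names and the document map are illustrative, not my real code):

```java
import java.util.List;
import java.util.Map;

import org.elasticsearch.action.bulk.BulkRequestBuilder;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.client.Client;

public class BulkIndexer {

    private final Client client;

    public BulkIndexer(Client client) {
        this.client = client;
    }

    // Called concurrently from several threads, each with its own batch.
    public void indexBatch(List<Map<String, Object>> docs) {
        BulkRequestBuilder bulk = client.prepareBulk();
        for (Map<String, Object> doc : docs) {
            bulk.add(client.prepareIndex("myindex", "mytype", (String) doc.get("id"))
                           .setSource(doc));
        }
        BulkResponse response = bulk.execute().actionGet();
        if (response.hasFailures()) {
            throw new IllegalStateException(response.buildFailureMessage());
        }
    }
}
```

Each batch is on the order of a few hundred small documents, so I wouldn't expect the bulk requests themselves to be the memory problem; the OOM comes from the merge thread.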
I have read the docs on limiting memory, but they all seem geared toward standalone setups via 'ES_HEAP_SIZE'. Is there a way to limit memory usage during indexing while in embedded mode? Is my ES instance set up in an inappropriate manner?
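For reference, these are the indexing-buffer and merge-related settings I've found that look relevant. The specific values are guesses, and I don't know whether any of them actually bound heap usage in embedded mode:

```java
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;

// Sketch only: same builder as above, plus memory/merge knobs from the 1.3 docs.
Settings settings = ImmutableSettings.settingsBuilder()
        .put("node.http.enabled", false)
        .put("index.gateway.type", "none")
        .put("index.number_of_shards", 1)
        .put("index.number_of_replicas", 0)
        .put("path.data", siIndexLoc)
        // Cap the shared indexing buffer (default is 10% of heap).
        .put("indices.memory.index_buffer_size", "64mb")
        // Keep merged segments small and merging single-threaded.
        .put("index.merge.policy.max_merged_segment", "256mb")
        .put("index.merge.scheduler.max_thread_count", 1)
        // Throttle merge I/O.
        .put("index.store.throttle.type", "merge")
        .put("index.store.throttle.max_bytes_per_sec", "20mb")
        .build();
```

Would any of these help here, or is the merge-thread OOM a sign of something else entirely?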