Suggestions on how to limit ES memory usage in embedded mode

I have a Grails web application with an embedded ES 1.3.5 instance. It is configured like this:

        import org.elasticsearch.common.settings.ImmutableSettings;
        import org.elasticsearch.common.settings.Settings;
        import org.elasticsearch.node.NodeBuilder;

        // Local-only embedded node: no HTTP endpoint, no gateway persistence,
        // a single shard with no replicas.
        Settings settings = ImmutableSettings.settingsBuilder()
                .put("node.http.enabled", false)
                .put("index.gateway.type", "none")
                .put("index.number_of_shards", 1)
                .put("path.data", siIndexLoc)
                .put("index.number_of_replicas", 0)
                .build();
        node = NodeBuilder.nodeBuilder().local(true).settings(settings).node();
        client = node.client();

It is embedded within a Grails app that has 1 GB of heap.
The use case is indexing a very small number of documents with very frequent updates.
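
The updates go through the bulk API, roughly like this (a simplified sketch, not my real code: MyDoc, changedDocs, and the index/type names are placeholders):

        import org.elasticsearch.action.bulk.BulkRequestBuilder;
        import org.elasticsearch.action.bulk.BulkResponse;

        // Sketch of the update path; MyDoc/changedDocs are placeholders.
        BulkRequestBuilder bulk = client.prepareBulk();
        for (MyDoc d : changedDocs) {
            // every update overwrites the same small set of document ids
            bulk.add(client.prepareIndex("myindex", "doc", d.getId())
                    .setSource(d.toJson()));
        }
        BulkResponse response = bulk.execute().actionGet();
        if (response.hasFailures()) {
            System.err.println(response.buildFailureMessage());
        }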

The issue is this: during indexing, I (apparently) run out of heap when I do lots of concurrent indexing through the bulk API, with this exception:

        Exception in thread "elasticsearch[Master Menace][[myindex][0]: Lucene Merge Thread #1]" org.apache.lucene.index.MergePolicy$MergeException: java.lang.OutOfMemoryError: Java heap space
            at org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:545)
            at org.elasticsearch.index.merge.scheduler.ConcurrentMergeSchedulerProvider$CustomConcurrentMergeScheduler.handleMergeException(ConcurrentMergeSchedulerProvider.java:134)

I have read the docs about limiting memory, but they all seem to be geared towards standalone setups, where you set 'ES_HEAP_SIZE'. Is there a way to limit memory usage during indexing while in embedded mode? Is my ES instance set up in an inappropriate manner?
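
The closest knobs I've turned up in the docs are the indexing buffer and merge throttling settings. This is just what I was considering trying; I don't know whether these are the right levers for an embedded node, and the values are guesses:

        // Candidate settings from the docs -- values are guesses, not tested:
        Settings throttled = ImmutableSettings.settingsBuilder()
                // cap the indexing buffer instead of the 10%-of-heap default
                .put("indices.memory.index_buffer_size", "32mb")
                // fewer concurrent Lucene merge threads per shard
                .put("index.merge.scheduler.max_thread_count", 1)
                // throttle merge I/O so merges back off under load
                .put("indices.store.throttle.type", "merge")
                .put("indices.store.throttle.max_bytes_per_sec", "5mb")
                .build();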

How much data are you indexing into it?

Here is a paste of localhost:9200/_stats

http://pastebin.com/8E0Dj2wf

So not much.
How much heap does ES have?

The JVM running Grails and ES had 512 MB; it would die during indexing without much effort. I bumped it up to 1024 MB, and it is much harder to make it run out now, but I can still make it happen without a lot of effort. I'd like some sort of safeguard against that.
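
Since everything shares one JVM, the heap ES gets is just whatever -Xmx gives the Grails app. This is how I'm sanity-checking what the embedded node actually sees (ES 1.x Java API; a diagnostic, not a fix):

        import org.elasticsearch.action.admin.cluster.node.stats.NodeStats;
        import org.elasticsearch.action.admin.cluster.node.stats.NodesStatsResponse;

        // Whole-JVM ceiling (Grails and ES share it)
        System.out.println("JVM max heap: "
                + Runtime.getRuntime().maxMemory() / (1024 * 1024) + " MB");

        // What the embedded node reports for its own JVM heap usage
        NodesStatsResponse stats = client.admin().cluster()
                .prepareNodesStats().setJvm(true).execute().actionGet();
        for (NodeStats ns : stats.getNodes()) {
            System.out.println("ES heap used: " + ns.getJvm().getMem().getHeapUsed());
        }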