My ES instance is having some memory issues, mainly this error:
`org.elasticsearch.common.breaker.CircuitBreakingException: [parent] Data too large`
Now I tried to increase the heap for my instance (t2.large) in jvm.options (using Elasticsearch 5):
-Xms8g -Xmx8g
But my ES still fails when I try to start the ES service on the instance:
`There is insufficient memory for the Java Runtime Environment to continue.
Native memory allocation (mmap) failed to map 10562961408 bytes for committing reserved memory.`
But this is a t2.large ... shouldn't it be able to handle that?
I also see in my Kibana dashboard: Heap Total (MB) 65.23
But I just increased the heap size in jvm.options, so what else could be the problem?
According to the AWS documentation, t2.large instances have just 8GB of main memory, so when the JVM tries to allocate that amount it will fail. You also need plenty of memory headroom for the OS page cache (for Lucene). So if you want to give Elasticsearch an 8GB heap, the machine needs at least 16GB of RAM (ideally a bit more).
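On an 8GB t2.large, a safer starting point is to give Elasticsearch roughly half the RAM and leave the rest to the OS. A minimal jvm.options sketch (the 4g value is just an illustration for an 8GB box, not a universal recommendation):

```
# jvm.options sketch for an 8GB instance (values are illustrative):
# keep Xms and Xmx equal, and at roughly half of physical RAM,
# so the rest stays available to the OS page cache for Lucene.
-Xms4g
-Xmx4g
```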
Also, T2 instances are somewhat unusual for Elasticsearch on AWS (unless you just want to play around with it).
Finally, circuit breakers are safeguards in Elasticsearch that abort an operation rather than letting a node run out of memory. So you should take care to size your bulk requests correctly and also watch out for memory-intensive queries (e.g. aggregations).
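If you want to see which breaker is tripping and how close each one is to its limit, the node stats API exposes the breaker counters. For example (assuming the default localhost:9200 endpoint, adjust for your setup):

```
# Show limit, estimated usage and trip count for each circuit breaker
# (parent, request, fielddata, ...) on every node.
curl -s 'localhost:9200/_nodes/stats/breaker?pretty'
```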
Great that it's fixed now (although it's interesting given these symptoms).
To be honest, I never performed any tests on T2 instances. We usually use I2 instances to have enough IOPS for indexing. Also, I'd expect quite some latency spikes in searches when CPU throttling kicks in after you've spent your CPU credits. This was really more of a hint for you, but if the T2 instances work for your use case then that's fine.