We are running 4 instances of Elasticsearch. Our 4-node cluster is made of
4 m1.medium EC2 instances, each with a 500 GB attached hard disk and 3.7 GB of memory.
We have been getting Java heap out-of-memory errors quite regularly, so
this time we decided to take a heap dump and analyze it. Unfortunately,
it is tough for us to make anything out of it, so we are asking for help:
can anybody make a purposeful interpretation of the following heap dump?
Can you reliably reproduce that exception? Is it possible that you are
faceting on an analyzed field (which means that all the terms of this
field are loaded into memory) or something like that? Maybe we can find
the reason for this and see how to solve it.
Also, it would make sense to mention your Elasticsearch and Java versions.
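
To illustrate the kind of request that can blow up the heap, here is a minimal
sketch of a terms facet on an analyzed field, sent through the REST API with
Python's requests library. The host, the index name "logs", and the field
"message" are hypothetical, not taken from the original post; the point is only
that faceting on an analyzed field pulls every distinct token of that field into
the field data cache, which a 3.7 GB heap may not survive.

# Minimal sketch, assuming an Elasticsearch 0.90.x-era node on localhost:9200
# and a hypothetical index "logs" with an analyzed "message" field.
# A terms facet on an analyzed field loads every distinct token of the field
# into the field data cache, which can exhaust a small heap.
import json
import requests

facet_query = {
    "query": {"match_all": {}},
    "facets": {
        # Faceting on the analyzed field: with log-like text the number
        # of distinct tokens keeps growing, and so does the heap usage.
        "message_terms": {"terms": {"field": "message", "size": 10}}
    },
}

resp = requests.post("http://localhost:9200/logs/_search",
                     data=json.dumps(facet_query))
print(json.dumps(resp.json(), indent=2))

If something like this matches your workload, mapping the field as not_analyzed
(or faceting on a separate, untokenized copy of it) keeps the field data down to
one term per document and is usually the first thing to try.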