My cluster has 4 machines; each node has 8 CPUs and 32 GB of memory, and the cluster holds thousands of indices and shards. In recent days, some nodes crashed with out-of-memory errors while serving queries. I found that segments memory uses about 14 GB of the 16 GB heap and is never released. If I run more queries, nodes crash with OOM errors. What can I do to avoid this?
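For reference, here is a minimal sketch of one way to check per-node heap usage and segments memory (the original post does not say how it was measured; the cluster address below is a placeholder and the `_cat/nodes` columns shown are standard Elasticsearch fields):

```python
# Sketch: inspect heap usage and memory held by open segments on each node.
import requests

CLUSTER = "http://localhost:9200"  # assumed address; replace with a node in your cluster

# The _cat/nodes API can report heap usage and segments memory per node.
resp = requests.get(
    f"{CLUSTER}/_cat/nodes",
    params={"v": "true", "h": "name,heap.percent,segments.count,segments.memory"},
)
print(resp.text)
```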