Is the heap size allotted to the whole cluster, or is it divided between indices?
Suppose we have 1 cluster with x amount of heap memory and 2 indices (as an example scenario). Index 1 is currently doing bulk activity, and index 2 is currently ingesting articles. We get an out-of-memory error on index 1. Shouldn't the same out-of-memory error also happen on index 2, since it uses the same heap memory?
The JVM heap is used for everything happening on that node, so an OOM applies to the entire process. If a node becomes unavailable following an OOM, this affects any action it would perform; its shards will be reallocated, provided replicas are available, so other nodes can then respond to indexing and searches.
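One way to see that the heap is a per-node resource shared by all indices is to check heap usage with the `_cat/nodes` API. This is a minimal sketch assuming a node reachable on `localhost:9200`; adjust the host, port, and any authentication for your setup:

```
# Heap usage is reported per node, not per index or per cluster
curl -s "localhost:9200/_cat/nodes?v&h=name,heap.percent,heap.current,heap.max"
```

The output shows one row per node; every index and shard on that node draws from the same `heap.max`.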
If Elasticsearch runs out of memory, there are a few things to look at: add more memory if possible, add more nodes to scale horizontally, or reduce the amount of data in your cluster (or review mappings and queries/aggregations to reduce resource usage).
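If you do add more memory, the heap is configured per node, not per index or per cluster. As a sketch, assuming a default install with a `config/jvm.options` file and a node with enough RAM to spare, you would set the minimum and maximum heap to the same value:

```
# config/jvm.options (or a file under jvm.options.d/ on newer versions)
-Xms4g
-Xmx4g
```

The usual guidance is to keep the heap at no more than about half of the node's physical RAM, leaving the rest for the filesystem cache.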