Elasticsearch Java Heap clarifications?

Hi,

Questions:

  1. Is the heap size allotted to the whole cluster, or is it divided between indices?
  2. Suppose we have one cluster with x amount of heap memory and two indices (as an example scenario). Index 1 is currently doing bulk activity, and index 2 is currently ingesting articles. We get an out-of-memory error on index 1. Should that same out-of-memory error not also happen on index 2, since it uses the same heap memory?

Thanks

Heap space is a parameter of the JVM; it applies to each Elasticsearch node/instance, not to individual indices.
The recommendation is to set the JVM heap to half the amount of RAM, but always below 32 GB:
https://www.elastic.co/guide/en/elasticsearch/guide/current/heap-sizing.html
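
For example, on a machine with 32 GB of RAM you might give the JVM 16 GB. A minimal sketch, assuming a 5.x+ install where heap is set in `config/jvm.options` (older versions use the `ES_HEAP_SIZE` environment variable; the size here is illustrative, not a recommendation for your hardware):

```
# config/jvm.options — set min and max heap to the same value
# so the heap is never resized at runtime
-Xms16g
-Xmx16g
```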

The JVM heap is used for everything happening on that node, and an OOM (out-of-memory) error applies to the entire process. If a node becomes unavailable following an OOM, this affects any action it would have performed; its shards will be reallocated, provided replicas are available, so that other nodes can then respond to indexing and searches.
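
To see how close each node is to exhausting its heap, you can query the nodes stats and cat APIs. A minimal sketch, assuming the cluster is reachable on localhost:9200:

```
# Per-node heap usage: percentage used, current bytes, and configured max
curl -s "localhost:9200/_cat/nodes?v&h=name,heap.percent,heap.current,heap.max"

# More detailed JVM statistics, including garbage-collection counts and timings
curl -s "localhost:9200/_nodes/stats/jvm?pretty"
```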

If Elasticsearch runs out of memory, there are a few things to look at: add more memory if possible, add more nodes to scale horizontally, or reduce the amount of data in your cluster (or review mappings and queries/aggregations to reduce resource usage). The sketch below shows a couple of ways to find out where heap is being spent.
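
When reviewing mappings and queries, two common heap consumers worth checking are fielddata and the circuit breakers. A minimal sketch using the stats APIs (again assuming localhost:9200):

```
# Fielddata heap usage per node and per field — large values often point
# to sorting or aggregating on text fields
curl -s "localhost:9200/_cat/fielddata?v"

# Circuit breaker statistics — show which breakers are tripping and
# how close each one is to its configured limit
curl -s "localhost:9200/_nodes/stats/breaker?pretty"
```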
