Good afternoon.
We have two nodes with 32 GB of RAM each; 16 GB (50%) is allocated to the Elasticsearch heap. There are 1000 indices of different sizes spread across these two nodes: some are empty, others exceed 20 GB. All indices share the same settings: 2 replicas and 7 shards. Periodically (we determined this experimentally), a node fails with a "java heap space" OutOfMemoryError because of one specific index of about 20 GB. If we delete this index, the problem disappears.
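For context, every index is created with the settings described above. A minimal sketch using the official Python client (the index name and node address are placeholders, not our real values):

```python
# Minimal sketch of how each of our indices is configured.
# "example-index" and the node address are placeholders.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

es.indices.create(
    index="example-index",
    body={
        "settings": {
            "number_of_shards": 7,    # every index uses 7 primary shards
            "number_of_replicas": 2,  # and 2 replicas
        }
    },
)
```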
Questions:
- How can we understand what exactly Elasticsearch is doing before the node fails? What influences this process? (A monitoring sketch we could leave running while waiting for a failure is shown after this list.)
- If we reduce the number of shards, will memory consumption on the nodes decrease? What else would this affect?
- Could large documents in this particular index be the cause of the node failure?
- Does the number of replicas on an index affect resource consumption during indexing? By how much?
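For the first question, this is the kind of polling we could run to capture heap and shard state right before a failure; would something like this help? (The node address, polling interval, and output file are assumptions, not our real config.)

```python
# Sketch: periodically record JVM heap usage and per-index shard sizes
# so we can see what the cluster looks like just before a node dies.
import json
import time

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed node address

while True:
    snapshot = {
        "ts": time.time(),
        # heap_used_percent and GC stats per node
        "jvm": es.nodes.stats(metric="jvm"),
        # shard sizes in GB, e.g. to watch the ~20 GB index
        "shards": es.cat.shards(format="json", bytes="gb"),
    }
    # default=str so newer client response objects still serialize
    with open("es-heap-snapshots.log", "a") as f:
        f.write(json.dumps(snapshot, default=str) + "\n")
    time.sleep(30)
```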
Thank you in advance for your answers!