Hello,
We have a problem where one of our Elasticsearch nodes runs out of memory and crashes, and in the process the whole (two-node) cluster stops functioning. What can we do so that if one node fails, it does not take the other down with it? All shards are replicated on both nodes. For reference, a rough sketch of our elasticsearch.yml is below.
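This is roughly what each node's elasticsearch.yml looks like (the cluster and node names here are placeholders, and anything not shown is left at its default):

    cluster.name: our-cluster
    node.name: node-1             # node-2 on the other machine
    index.number_of_replicas: 1   # one replica per shard, so each node holds a full copy
    discovery.zen.ping.unicast.hosts: ["node-1", "node-2"]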
I realize we are probably cutting it too close on memory, which is most likely why the first node goes down -- the index is 8.1 GB and each node has 8 GB of RAM. Are there any specific memory requirements for Elasticsearch? I've searched quite a bit and have not been able to find anything on system requirements for the application. I presume having at least as much memory as the index size would be a good starting point, but it would be best to know what the proper practice is.
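In case it is relevant, we start each node along these lines and eyeball heap usage with the node stats API (the 4g heap is just my guess at a sane value for an 8 GB machine -- I genuinely don't know, hence the question):

    # 4g heap is a guess on my part; everything else is default
    ES_HEAP_SIZE=4g bin/elasticsearch

    # quick look at per-node JVM heap usage
    curl -s 'http://localhost:9200/_nodes/stats/jvm?pretty'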
Thanks for any help you can provide!
Mike