Relation between JVM usage and a multi-node setup in Elasticsearch

Hi Team,

We’re encountering recurring circuit breaker issues in our Elasticsearch cluster. Our current setup is as follows:
RAM: 32GB
JVM heap: 16GB

Previously, we had 16GB RAM and an 8GB JVM heap but experienced the same circuit breaker issue. After increasing the resources to the current configuration, the problem reappeared within six months, with JVM usage reaching 91%.
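For reference, this is roughly how we've been checking per-node heap usage and circuit breaker trip counts. It's only a minimal sketch that assumes an unsecured node on http://localhost:9200 (adjust the URL and authentication for your own setup):

```python
import requests

ES_URL = "http://localhost:9200"  # assumption: local, unsecured node; adjust URL/auth as needed

# Per-node heap usage as a percentage of the configured heap
nodes = requests.get(
    f"{ES_URL}/_cat/nodes",
    params={"h": "name,node.role,heap.percent,heap.max", "format": "json"},
).json()
for node in nodes:
    print(f'{node["name"]} ({node["node.role"]}): heap {node["heap.percent"]}% of {node["heap.max"]}')

# Circuit breaker statistics: which breaker trips, and how often
stats = requests.get(f"{ES_URL}/_nodes/stats/breaker").json()
for node_id, node in stats["nodes"].items():
    for breaker_name, breaker in node["breakers"].items():
        if breaker["tripped"]:
            print(node["name"], breaker_name, "tripped", breaker["tripped"], "times")
```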

From my research, it seems Elasticsearch can be split across multiple nodes. Would that kind of setup help reduce the JVM usage in my case? Also, do you have any advice on which roles I should configure for each node?

Any advice or experience shared would be greatly appreciated.

Thank you

Hello!

I'm afraid I can't give you silver-bullet advice that is sure to resolve the issue. The right memory settings and cluster setup (and shard sizing, and much more) depend on your data, your usage scenarios, and so on.
However, perhaps one of the recent Search Labs posts on troubleshooting Elasticsearch memory, or the documentation about high memory pressure, could guide you? There's a small sketch below for checking the numbers those resources describe.
I'm sorry you're facing this issue and that my answer is mostly "it depends", but right now I know too little about your cluster to help further.
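For what it's worth, here is a rough sketch of pulling the per-node heap numbers the memory-pressure docs are based on. It assumes an unsecured cluster reachable at http://localhost:9200, so adjust the URL and authentication for your environment:

```python
import requests

ES_URL = "http://localhost:9200"  # assumption: unsecured cluster; adjust URL/auth as needed

# Approximate JVM memory pressure per node as old-gen pool used / old-gen pool max,
# which is the ratio the high-memory-pressure documentation describes.
stats = requests.get(f"{ES_URL}/_nodes/stats/jvm").json()
for node_id, node in stats["nodes"].items():
    old_pool = node["jvm"]["mem"]["pools"]["old"]
    used = old_pool["used_in_bytes"]
    maximum = old_pool["max_in_bytes"]
    pressure = 100 * used / maximum if maximum > 0 else float("nan")
    print(f'{node["name"]}: ~{pressure:.0f}% JVM memory pressure')
```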

Good luck!

What is the full output of the cluster stats API?
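If you're not sure how to grab it, something like this minimal sketch should work (assuming an unsecured node at http://localhost:9200; adjust the URL and authentication for your cluster). With curl it's just a GET to /_cluster/stats?human&pretty.

```python
import requests

ES_URL = "http://localhost:9200"  # assumption: adjust URL/auth for your cluster

# Fetch the full cluster stats and save them to a file you can paste into the thread.
resp = requests.get(f"{ES_URL}/_cluster/stats", params={"human": "true", "pretty": "true"})
with open("cluster_stats.json", "w") as f:
    f.write(resp.text)
print(resp.text[:500])  # quick preview
```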