I'm using Elasticsearch 7.10.2 as a single node, and I'm getting this error:
Data too large, data for [<http_request>] would be [3985754120/3.7gb], which is larger than the limit of [3910375833/3.6gb], real usage: [3985754120/3.7gb], new bytes reserved: [0/0b], usages [request=0/0b, fielddata=6668796/6.3mb, in_flight_requests=0/0b, model_inference=0/0b, accounting=3637590/3.4mb]: circuit_breaking_exception","statusCode":429,"error":"Too Many Requests"}
I increased the heap size to 4 GB, and the machine has 24 GB of RAM. Should I increase it further? Or would adding two more nodes, making it a three-node Elasticsearch cluster, help here?
Increasing the heap size, as you have done, is the right first step. As for adding nodes, it really depends. There are some considerations listed in this old topic that could help you decide, and there is further advice on node sizing in the documentation that can help too.
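For reference, the heap is set in `config/jvm.options` (or a file dropped into `config/jvm.options.d/`), with the minimum and maximum kept equal. A minimal sketch, using 8g purely as an illustration rather than a recommendation for your workload:

```
# config/jvm.options.d/heap.options
# Min and max heap must match. Keep the heap at or below ~50% of RAM,
# and under the compressed-oops cutoff (around 26 GB on most systems).
-Xms8g
-Xmx8g
```

After restarting, you can watch the breaker that produced this error with `GET _nodes/stats/breaker`; the `parent` entry shows the limit (95% of the heap by default in 7.x, controlled by `indices.breaker.total.limit`) and the estimated usage that triggered the 429.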