I've been having a problem lately with my Elastic Stack deployment on Kubernetes. When I try to visualize my data in Kibana and select a longer time range (more data), I get this error:
```
Request error: circuit_breaking_exception, [parent] Data too large, data for [indices:data/write/update[s]] would be [31381782974/29.2gb], which is larger than the limit of [30601641984/28.5gb], real usage: [31381782528/29.2gb], new bytes reserved: [446/446b], usages [inflight_requests=19944/19.4kb, request=2307688833/2.1gb, fielddata=186790714/178.1mb, eql_sequence=0/0b, model_inference=0/0b]
```
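For context, this is how I've been checking per-node breaker usage while debugging. It's a minimal sketch using the official Python client; the `http://localhost:9200` URL is just a placeholder, so substitute your own Kubernetes service endpoint and credentials:

```python
# Minimal diagnostic sketch (assumption: the cluster is reachable at
# http://localhost:9200 -- swap in your own service URL and auth).
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Per-node circuit breaker stats; "parent" is the breaker tripping in the error.
stats = es.nodes.stats(metric="breaker")
for node in stats["nodes"].values():
    parent = node["breakers"]["parent"]
    print(node["name"],
          "used:", parent["estimated_size"],
          "limit:", parent["limit_size"])
```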
What is the best way to eliminate these errors?

- Add more RAM? I currently have 3 nodes with 64 GB RAM each (30 GB allocated to the JVM heap); the breaker limit in the error works out to exactly 95% of that heap, as the check below shows.
- Add more nodes?
- Reduce the amount of data displayed on the dashboard?
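If it helps narrow things down: the limit reported in the error matches the default parent circuit breaker, which is 95% of the JVM heap when `indices.breaker.total.use_real_memory` is enabled, so my breaker settings appear to be at their defaults:

```python
# The limit reported in the error is exactly 95% of a 30 GB heap, i.e. the
# default parent breaker limit (indices.breaker.total.limit = 95%).
heap_bytes = 30 * 1024**3             # 30 GB JVM heap per node
limit_bytes = int(heap_bytes * 0.95)  # default parent breaker limit
print(limit_bytes)                    # 30601641984 -> the "28.5gb" limit in the error
```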