We have field data configured at 40% of the available heap (currently 122 GB), which comes to 48.8 GB. But we are seeing 835 evictions, and field data easily climbs to 47.7 GB.
Under this condition heap usage sits at 95% and we see long-running full GCs with G1GC configured. CPU usage also went up to 80% during this time.
I wanted to know how much load each eviction puts on the system.
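To correlate eviction counts with field data memory, something like the minimal sketch below could help. It polls the real `_nodes/stats/indices/fielddata` endpoint; the host URL, lack of authentication, and use of the `requests` library are assumptions you would adapt to your cluster.

```python
# Minimal polling sketch, assuming an unauthenticated cluster at
# http://localhost:9200 and the `requests` library.
import time
import requests

ES_URL = "http://localhost:9200"  # assumption: adjust host/auth for your cluster

def fielddata_snapshot():
    # Node-level field data stats: memory in use and cumulative evictions
    stats = requests.get(f"{ES_URL}/_nodes/stats/indices/fielddata").json()
    for node_id, node in stats["nodes"].items():
        fd = node["indices"]["fielddata"]
        print(f'{node["name"]}: '
              f'fielddata={fd["memory_size_in_bytes"] / 1024**3:.1f} GiB, '
              f'evictions={fd["evictions"]}')

# Poll every minute; a steadily climbing eviction counter while field data
# memory hovers near its limit suggests the cache is thrashing.
while True:
    fielddata_snapshot()
    time.sleep(60)
```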
Yes, we have plans in place to move to 30 GB heap nodes and scale horizontally. But before that we wanted to understand what is consuming so much of our heap: with field data limited to 48.8 GB per host and the filter cache to 12.2 GB, that should still leave roughly 61 GB of heap available.
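Just to make the arithmetic explicit, here is a small sketch using the numbers quoted above (not measured values); the 12.2 GB filter cache is assumed to correspond to a 10% cap.

```python
# Heap budget arithmetic from the numbers quoted in the question.
heap_gb = 122.0
fielddata_cap_gb = 0.40 * heap_gb     # 48.8 GB field data cap
filter_cache_cap_gb = 0.10 * heap_gb  # 12.2 GB filter cache cap (assumed 10%)

remaining_gb = heap_gb - fielddata_cap_gb - filter_cache_cap_gb
print(f"Headroom outside the two caches: {remaining_gb:.1f} GB")  # ~61.0 GB
```

That remaining ~61 GB still has to hold the usual other consumers (Lucene segment memory, indexing buffers, in-flight request and aggregation overhead), which is presumably where the rest of the pressure comes from.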