I've read a lot, and I've been doing some tests with a 9-node Elasticsearch cluster running on Docker. Each node runs in a different VM on an internal network, and I've configured 3 master nodes (no data), 2 client nodes (no data), and 4 data nodes.
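For reference, the node roles are set the usual way in each node's elasticsearch.yml; roughly like this per node type:

    # 3 dedicated master nodes
    node.master: true
    node.data: false

    # 2 client/coordinating nodes
    node.master: false
    node.data: false

    # 4 data nodes
    node.master: false
    node.data: true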
Right now we have ~130k documents and it's working pretty well, but I can see that the Heap Used % is really high, always around 85-90%. My data nodes are SSD machines with 56GB of RAM (limited to 28GB).
Is this normal? If I drill into the Heap Used detail, I can see that the 4 data nodes are always above 70%.
Regarding monitoring, we are using this integration with New Relic (https://github.com/s12v/newrelic-elasticsearch), in addition to the monitoring (also on New Relic) of every VM. Do you consider it necessary to install Marvel?
Hi! Try issuing curl -s "localhost:9200/_nodes/stats/?pretty" > nodes_stats.txt and look through nodes_stats.txt for the values of memory_size_in_bytes and memory_in_bytes. That way you can sum the values per node, possibly spot the main memory eater, and compare it against the JVM heap usage.
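For example, something along these lines should pull out the interesting fields (just a rough grep; the exact field paths can differ a bit between ES versions):

    # dump the node stats once
    curl -s "localhost:9200/_nodes/stats/?pretty" > nodes_stats.txt
    # caches, fielddata and segments report memory_size_in_bytes / memory_in_bytes
    grep -E '"(memory_size_in_bytes|memory_in_bytes)"' nodes_stats.txt
    # and the jvm section shows the heap figures to compare against
    grep -E '"(heap_used_in_bytes|heap_max_in_bytes)"' nodes_stats.txt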
Hi @matiasdecarli, I'm going to do some performance testing on my ES cluster too, so I'm pretty interested in your case.
Besides Marvel, you can install this ES plugin (http://www.elastichq.org/) or just run it from your local PC and connect to the ES cluster. The Node Diagnostics may provide some helpful information on resource usage. On the ES node that ElasticHQ connects to, you need to add these two lines to elasticsearch.yml and restart ES:
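From memory, the two lines are the standard CORS settings (double-check against your ES version):

    http.cors.enabled: true
    http.cors.allow-origin: "*"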
For testing purposes, it should be OK to set http.cors.allow-origin to *. For production, perhaps limit it to localhost with http.cors.allow-origin: /https?://localhost(:[0-9]+)?/
More info: HTTP | Elasticsearch Guide [2.1] | Elastic
Are your ES nodes running on Linux?
How big are your 130k documents?
With ElasticHQ, you will see how many GB of heap the ES nodes are using, instead of only the Heap %.
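You can also get the absolute numbers straight from the cluster without a plugin; for example, something like this with the cat API (column names may vary a bit by version):

    curl -s "localhost:9200/_cat/nodes?v&h=name,heap.percent,heap.current,heap.max,ram.percent"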