Heap used always > 85%

Hi guys, I'm new to this forum and to Elasticsearch!

I've read a lot, and I've been doing some tests with a 9-node Elasticsearch cluster running on Docker. Each node runs in a different VM on an internal network, and I've configured 3 master nodes (no data), 2 client nodes (no data) and 4 data nodes.

Right now we have ~130k documents and it's working pretty well, but I can see that the Heap Used % is really high, always around 85-90%. My data nodes are SSD machines with 56GB of RAM (heap limited to 28GB).

Is this normal? If I look at the Heap Used detail I can see that the 4 data nodes are always above 70%.
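In case it helps, the same per-node numbers show up with the _cat nodes API too (assuming the default 9200 port on one of the client nodes):

curl -s "localhost:9200/_cat/nodes?v&h=name,heap.percent,heap.current,heap.max"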

Thanks in advance

Welcome!

What sort of data is it? Do you have monitoring - e.g. Marvel - in place?

Hi Mark! Thanks for your reply!

We are storing object data types.

Regarding monitoring, we are using this (https://github.com/s12v/newrelic-elasticsearch) integration with New Relic, in addition to the monitoring (also on New Relic) of every VM. Do you consider it necessary to install Marvel?

Thanks in advance

Hi! Try issuing curl -s "localhost:9200/_nodes/stats/?pretty" > nodes_stats.txt and look through nodes_stats.txt for the values of memory_size_in_bytes and memory_in_bytes. That way you can sum the values per node, possibly find the main memory eater, and also compare it to the JVM heap usage.
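If you happen to have jq installed (just an assumption on my side), something like this gives a quick per-node summary of the heap actually used next to the usual in-heap consumers from that same stats output:

curl -s "localhost:9200/_nodes/stats/indices,jvm" > nodes_stats.json
# per node: JVM heap used vs. fielddata and segment memory
jq '.nodes[] | {name, heap_used: .jvm.mem.heap_used_in_bytes, fielddata: .indices.fielddata.memory_size_in_bytes, segments: .indices.segments.memory_in_bytes}' nodes_stats.json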

This is what I see every time

My elasticsearch.yml

network.publish_host: [VM ip]
network.bind_host: 0.0.0.0

script.engine.groovy.inline.update: on

node.master: [...]
node.data: [...]
node.client: [...]

cluster.name: [cluster name]
node.name: [node name]

indices.cluster.send_refresh_mapping: false
bootstrap.mlockall: true
action.disable_delete_all_indices: true

gateway.expected_nodes: 9
gateway.recover_after_time: 5m

indices.fielddata.cache.size: 75%
indices.breaker.fielddata.limit: 40%

discovery.zen.minimum_master_nodes: 2
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: [ips]

Marvel provides other info, like query and indexing rates, merging, etc., all of which can have an impact.

Ok, I will install Marvel today. Any particular metric that I should watch regarding the high heap usage?

Does my config look good?

What is the heap size you set on each node? 28G?

Yes. Each data node has 56GB, and they are set to use 28GB as suggested.
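As far as I understand, staying below ~30GB is what keeps compressed object pointers enabled, and you can confirm the JVM still uses them at 28GB with something like:

java -Xmx28g -XX:+PrintFlagsFinal -version | grep UseCompressedOops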

Hi @matiasdecarli, I'm going to do some performance testing on my ES cluster too, so I'm pretty interested in your case.

Besides Marvel, you can install the ElasticHQ plugin (http://www.elastichq.org/) or just run it from your local PC and connect to the ES cluster. The Node Diagnostics may provide some helpful information on resource usage. On the ES node that ElasticHQ connects to, you need to add these two lines to elasticsearch.yml and restart ES:

http.cors.enabled: true
http.cors.allow-origin: "*"

For testing purposes, it should be OK to set http.cors.allow-origin to *. For production, perhaps limit it to localhost with http.cors.allow-origin: /https?://localhost(:[0-9]+)?/
More info: HTTP | Elasticsearch Guide [2.1] | Elastic

Are your ES nodes running on Linux?
How big are your 130k documents?

With ElasticHQ, you will see how many GB of heap the ES nodes are using instead of only Heap %

This part

indices.fielddata.cache.size: 75%
indices.breaker.fielddata.limit: 40%

may be the reason why your heap usage is that high: with the cache size at 75%, fielddata alone is allowed to fill three quarters of the heap before anything gets evicted, and the breaker limit sits below the cache size, which as far as I know the docs advise against.
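To check whether fielddata is really the big consumer, something like this should show it per node (assuming curl access to any node):

curl -s "localhost:9200/_cat/fielddata?v"
curl -s "localhost:9200/_nodes/stats/indices/fielddata?pretty"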

Hi @anhlqn. Sounds great! Let me set this up today and get back to you with some real data!

http.cors.allow-origin: "*"

You need to double-quote the asterisk; otherwise ES won't start.

Hi Mark. I have Marvel in place. Any particular metric that I should look at?

I have ElasticHQ & Marvel in place.