GC running early?

I'm working on setting up a new cluster (new hardware), and I'm seeing unexpected heap behavior while doing some load testing. Elasticsearch 5.2.2, OpenJDK 8, SL7 (i.e. RHEL7). All nodes are configured with a 16GB heap, verified in the logs and the API, yet GC appears to kick in at around 1.5GB of heap usage. This is under constant indexing, with a mix of log-like and document-like (updating) behavior. One node stands out, but I don't see anything else unusual about it (it's not the master, and all nodes are configured identically with Puppet). Any suggestions?


Complete JVM options:


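For reference, in Elasticsearch 5.x the heap size is set via the `-Xms` and `-Xmx` flags in `config/jvm.options` (they should match so the heap is fully committed at startup). A 16GB configuration would look like:

```
# config/jvm.options
-Xms16g
-Xmx16g
```

If a systemd unit or environment variable (e.g. `ES_JAVA_OPTS`) overrides these, the effective heap can differ from what the file says, which is worth ruling out when the observed behavior doesn't match the config.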
Based on the graph it looks like your nodes are configured with 2GB heap, not 16GB. How did you install Elasticsearch? How are you starting it?


Right, hence my confusion. The API clearly shows 16GB though [1]. The nodes were installed and configured with the elastic/elasticsearch Puppet Forge module, and are run under systemd. Now that I've loaded more data, heap usage has gone up on the busier nodes [2], so maybe it was just a matter of getting more activity. This is a much larger cluster than we were using before.


name       heap.current heap.max
esworker08        1.6gb   15.8gb
esworker26        1.5gb   15.8gb
esworker05        1.5gb   15.8gb
esworker31        2.4gb   15.8gb
esworker01        1.4gb   15.8gb
esworker19          1gb   15.8gb
esworker32        1.1gb   15.8gb
esworker13        2.2gb   15.8gb
esclient01          7gb   15.9gb
esworker15        1.1gb   15.8gb
esclient02          1gb   15.9gb
esworker24        2.2gb   15.8gb
esworker16        1.5gb   15.8gb
esworker06        1.1gb   15.8gb
esworker29        1.9gb   15.8gb
esworker09        1.9gb   15.8gb
esworker11        960mb   15.8gb
esworker20      856.4mb   15.8gb
esworker33        9.2gb   15.8gb
esworker28        1.2gb   15.8gb
esworker07        1.9gb   15.8gb
esworker03          1gb   15.8gb
esworker25        1.1gb   15.8gb
esworker14      832.5mb   15.8gb
esworker02        1.4gb   15.8gb
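To make the pattern in the output above easier to see, here's a small sketch (not part of the original thread; the helper name and sample values are illustrative) that converts the `heap.current` / `heap.max` strings into a utilization percentage. Most worker nodes sit around 10% of their configured heap, which is consistent with GC simply reclaiming memory well before the heap fills up:

```python
def parse_size_mb(s: str) -> float:
    """Parse a size string like '1.6gb' or '856.4mb' into megabytes."""
    s = s.strip().lower()
    if s.endswith("gb"):
        return float(s[:-2]) * 1024
    if s.endswith("mb"):
        return float(s[:-2])
    raise ValueError(f"unrecognized size: {s}")

# A few rows from the _cat/nodes output above.
nodes = {
    "esworker08": ("1.6gb", "15.8gb"),
    "esclient01": ("7gb", "15.9gb"),
    "esworker33": ("9.2gb", "15.8gb"),
}

for name, (current, maximum) in nodes.items():
    pct = 100 * parse_size_mb(current) / parse_size_mb(maximum)
    print(f"{name}: {pct:.0f}% of heap used")
```

A heap hovering at a small fraction of its maximum under load is normal for the CMS collector's young-generation cycles; it doesn't by itself indicate the heap limit is misconfigured.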


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.