JVM heap size continuously increasing

Elasticsearch version (bin/elasticsearch --version):
5.3.0
Plugins installed:
None
JVM version (java -version):
1.8.0_66
OS version (uname -a if on a Unix-like system):
CentOS 7
Linux localhost.localdomain 3.10.0-229.el7.x86_64 #1 SMP Fri Mar 6 11:36:42 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
Description of the problem including expected versus actual behavior:

We are running a one-node Elasticsearch cluster with a 32GB heap. The heap usage keeps increasing even when there is no indexing or querying at all, so what is growing in the heap? Is it a bug, or is this normal JVM behavior?
See the image below from x-pack for detailed JVM and index info.



Nothing seems abnormal here. What makes you think something is wrong?

BTW, you are using x-pack (monitoring), so you do have indexing happening on your cluster.
Check GET _cat/indices?v.
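
For example, something like this (assuming the node listens on the default localhost:9200):

    curl -s 'localhost:9200/_cat/indices?v'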

Unless you set up an external monitoring cluster (recommended).

Thanks for the quick reply. I am not sure why the heap size is still increasing after I stopped indexing at about 20:00. The heap usage slowly increases, drops sharply, and then slowly increases again. What causes this increase in heap size?
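
(For reference, the same heap and GC numbers shown in the x-pack charts can also be pulled directly from the node stats API; a minimal sketch, assuming the default localhost:9200:)

    curl -s 'localhost:9200/_nodes/stats/jvm?pretty'

The output includes heap_used_in_bytes and per-collector GC counts, which makes the slow-rise-then-sharp-drop pattern easier to correlate with garbage collections.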

  1. Is that a problem?
  2. Did you check what I said? What is the output?

  1. Yes, I am concerned about the JVM usage. I cannot sleep until I find out the reason for the increase.

  2. Yes, I checked. It shows lots of indices:

    green  open fulleth_smartprobe1_nic0_4h_2018011120 EnS4LY6MTNKZNDCPgZqSGQ 1 0  3000000     0  677.5mb  677.5mb
    green  open fulleth_smartprobe0_nic0_4h_2018010312 XZ-gKOs7R96cnGrkeega4A 1 0  3000000     0  687.1mb  687.1mb
    green  open fulleth_smartprobe0_nic0_4h_2018012808 6d-tMgUqRr-kqYiiEuALSQ 1 0  3000000     0  701.1mb  701.1mb
    green  open fulleth_smartprobe1_nic0_4h_2018010612 BCYjupeLTgyKv9DcwaG1Bw 1 0  3000000     0  715.2mb  715.2mb
    green  open fulleth_smartprobe2_nic0_4h_2018011016 rUWxrjxUSdudY1pJdN1pMA 1 0  3000000     0  674.8mb  674.8mb
    green  open fulleth_smartprobe4_nic0_4h_2018010312 p4rK0y7tSKGwMmoCHG-VsQ 1 0  3000000     0  670.9mb  670.9mb

Please don't post images of text as they are hardly readable and not searchable.

Instead, paste the text and format it with the </> icon. Check the preview window.

Could you edit your post and paste all the content please?

I can't paste all the content because it exceeds the maximum post size. All the indices look like what I posted before, and there are also x-pack indices, like:

yellow open   .monitoring-es-2-2018.02.01            U6h5VN5qS2KE0uNczWKT0Q   1   1    2441813         3328   1015.9mb       1015.9mb
yellow open   .monitoring-kibana-2-2018.02.03        VOOzRbp7RuKWcyr1nsHi9w   1   1       8638            0        2mb            2mb
yellow open   .monitoring-es-2-2018.02.06            qhi947-MQAWT_Ue1xABHDw   1   1   11095304         8985        4gb            4gb
yellow open   .monitoring-kibana-2-2018.02.01        2vK5kKxVQG6nAa-KXxTRUA   1   1       8637            0      2.1mb          2.1mb
yellow open   .monitoring-es-2-2018.02.04            C0_VsXFMSECr3gQa2CFhkg   1   1    4320019         6077      1.6gb          1.6gb
yellow open   .monitoring-es-2-2018.02.02            ieW7ZTgdSaWlwpAyk3iQjg   1   1    4140436         4856      1.6gb          1.6gb
yellow open   .monitoring-kibana-2-2018.02.02        -pULU5BSRbSBF3QsL8YkMQ   1   1       8638            0      2.1mb          2.1mb
yellow open   .monitoring-kibana-2-2018.02.07        aG0gHwK1S72p4fqfiJDs_g   1   1       8636            0      2.1mb          2.1mb
yellow open   .monitoring-es-2-2018.02.05            tAZiEawSR3qtYpYzRgeU3A   1   1    4611399        12916      1.6gb          1.6gb
yellow open   .monitoring-es-2-2018.02.08            uZcpzUCTQYKHn3Nym8OnUQ   1   1    4166667        20095      1.4gb          1.4gb
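
(If it helps, just the x-pack monitoring indices can be listed with an index pattern; a sketch, assuming the default localhost:9200:)

    curl -s 'localhost:9200/_cat/indices/.monitoring-*?v'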

You can share as a gist on gist.github.com.

As you can see, you have indexing operations happening all the time, so you can't say that you are not indexing anything.

As I said, having a dedicated monitoring cluster is better. BTW, if you are a Gold or Platinum customer, this monitoring cluster is available for free on cloud.elastic.co.
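
A minimal sketch of what such a setup looks like in elasticsearch.yml on the monitored cluster (the exporter name and host below are placeholders):

    xpack.monitoring.exporters:
      my_remote_monitor:
        type: http
        host: ["http://monitoring-cluster:9200"]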

All your "business" shards seem to be small (less than 800mb). Is that on purpose?
How did you determine this limit of 3,000,000 documents per shard?

How many shards do you have in total?
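
For example (assuming the default localhost:9200), the totals are visible in the cluster health output, which reports active_primary_shards and active_shards:

    curl -s 'localhost:9200/_cluster/health?pretty'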

Hi, I have 1458 indices, https://gist.github.com/junkainiu/e6a3f7e32fa9b25f7bead9ca81d1fd8f

Does that mean the heap size increase is caused by the monitoring indices?
The document limit was set on purpose, to test JVM usage with the same total data size on the node but a different number of indices.

Maybe. And probably by an ever-growing cluster state.
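
(One rough way to gauge this, a sketch assuming the default localhost:9200, is to measure the size of the serialized cluster state:)

    curl -s 'localhost:9200/_cluster/state' | wc -c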

May I suggest you look at the following resources about sizing:

https://www.elastic.co/elasticon/conf/2016/sf/quantitative-cluster-sizing

Thanks for the resources. I understand that 1000+ indices is too many and that 800MB per shard is too small for a 32GB heap. A better architecture should be applied. I will try a dedicated monitoring cluster and try to find more information about the problem. Thanks.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.