We were testing out Elasticsearch and trying to determine the appropriate heap space to give it. The box we are currently running it on is quite small. There are a couple of questions we had about how JVM memory usage is reported by
top on Linux.
We started off by giving ES 384M of heap space (both the Xms and Xmx values). After creating a few indices and inserting data into the cluster, we noticed that the process is using considerably more memory than the configured heap. We understand that thread stacks, metaspace, etc. also get counted in this number, but the actual usage still seems excessive.
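For reference, this is roughly how the heap was configured. This is a sketch; ES_JAVA_OPTS is one common mechanism for passing JVM flags to Elasticsearch (the jvm.options file is another, depending on the ES version):

```shell
# Pin initial (Xms) and max (Xmx) heap to the same 384 MB value,
# so the JVM never resizes the heap at runtime.
export ES_JAVA_OPTS="-Xms384m -Xmx384m"
./bin/elasticsearch
```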
What we want to know is: does the Lucene file system cache also get reported as part of this number? If not, does anybody know of a way to analyze the exact memory layout of the JVM (not jhat/jmap, which only let us analyze the heap), or a way to figure out the Lucene cache size?
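One approach we have been looking at, sketched below. jcmd's native memory report only works if the JVM was started with Native Memory Tracking enabled, and 27653 is the ES pid from the top output further down:

```shell
# Requires the JVM to have been started with:
#   -XX:NativeMemoryTracking=summary
# Prints a per-category breakdown: heap, metaspace, thread stacks,
# code cache, GC structures, internal allocations, etc.
jcmd 27653 VM.native_memory summary

# Alternatively, dump the full memory map of the process; mmap'd Lucene
# segment files show up here as file-backed mappings. Sort by RSS (column 3)
# to see the largest resident regions first.
pmap -x 27653 | sort -k3 -n -r | head -20
```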
Here is the output of the
top command for reference:
  PID USER     PR  NI  VIRT   RES   SHR   S %CPU %MEM   TIME+   COMMAND
27653 abcuser  20   0  2459m  675m   23m  S 36.6 67.8  8:58.32  java   <---- ES process
 4959 abcuser  30  10  2236m   88m  4044  S  0.0  8.9  1:58.32  java
 4832 defuser  30  10  1975m   49m  2980  S  0.3  5.0  5:54.86  java
 3968 abcuser  20   0   113m   12m  2364  S  0.0  1.3  0:09.27  abc-linux
 3969 abcuser  20   0   124m   10m  2368  S  0.0  1.0  0:09.92  abc-linux
32097 root     20   0   115m  6740  5672  S  0.0  0.7  0:00.00  sshd
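As a sanity check on the numbers above, here is the gap between what top reports as resident and the heap we configured, which is the part we cannot account for:

```shell
# RES reported by top for the ES process, in MB (from the output above)
res_mb=675
# Configured heap (Xms = Xmx)
heap_mb=384
# Everything resident beyond the heap: thread stacks, metaspace,
# code cache, direct buffers, and possibly mmap'd file pages
echo "non-heap resident: $((res_mb - heap_mb)) MB"
# prints: non-heap resident: 291 MB
```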