ElasticSearch 2.1.1 RAM Issue

Hi, I have a strange issue with ES RAM.

free -m shows little free RAM, but the KOPF plugin reports plenty of free heap. I am unable to figure out how to debug this.

ES version = 2.1.1. The machine has 8 GB RAM allotted, and Java -Xmx is configured to 4 GB.

ubuntu@vm01:~$ free -m
             total       used       free     shared    buffers     cached
Mem:          7983       7639        344          0        313        775
-/+ buffers/cache:       6550       1433
Swap:            0          0          0

The KOPF plugin shows heap usage around 4% (screenshot attached).

ubuntu@vm01:~$ cat /proc/meminfo
MemTotal: 8175564 kB
MemFree: 351492 kB
Buffers: 321260 kB
Cached: 794728 kB
SwapCached: 0 kB
Active: 1021088 kB
Inactive: 346700 kB
Active(anon): 274756 kB
Inactive(anon): 284 kB
Active(file): 746332 kB
Inactive(file): 346416 kB
Unevictable: 5197136 kB
Mlocked: 5197176 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 6488 kB
Writeback: 0 kB
AnonPages: 5448928 kB
Mapped: 46232 kB
Shmem: 360 kB
Slab: 1186336 kB
SReclaimable: 1172644 kB
SUnreclaim: 13692 kB
KernelStack: 1744 kB
PageTables: 15824 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 4087780 kB
Committed_AS: 5433540 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 14868 kB
VmallocChunk: 34359719707 kB
HardwareCorrupted: 0 kB
AnonHugePages: 5285888 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 30720 kB
DirectMap2M: 8488960 kB

Help would be appreciated.



I'm afraid I don't understand what you are trying to achieve. free shows system-wide memory usage, whereas the Kopf plugin only shows the Java heap usage of the Elasticsearch process. First, Elasticsearch stores all kinds of things off-heap (in the OS filesystem cache), which you will simply not see in Kopf; second, your operating system and other processes also need memory. I hope this points you in the right direction.
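A small Python sketch shows why the two readings are consistent: Kopf measures used heap against -Xmx, while free measures the whole box. The heap figure below is hypothetical (chosen to match the ~4% Kopf reading); the system figures come from the free -m output above.

```python
def heap_pct(heap_used_bytes, heap_max_bytes):
    """Heap usage as Kopf reports it: used heap relative to -Xmx."""
    return 100.0 * heap_used_bytes / heap_max_bytes

def system_used_pct(mem_total_kb, mem_free_kb, buffers_kb, cached_kb):
    """System-wide 'really used' memory: the -/+ buffers/cache view of free."""
    used_kb = mem_total_kb - mem_free_kb - buffers_kb - cached_kb
    return 100.0 * used_kb / mem_total_kb

# Hypothetical: ~170 MB of a 4 GB heap in use, i.e. the ~4% Kopf shows.
print(round(heap_pct(170 * 2**20, 4 * 2**30), 1))   # → 4.2

# Actual /proc/meminfo figures from the post: ~82% of the box in use.
print(round(system_used_pct(8175564, 351492, 321260, 794728), 1))
```

Both numbers are correct at the same time; they simply describe different things.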


Hi Daniel,

Thanks a lot.

Then my concern is the off-heap memory consumed by ES.
In total I have 8 GB on the box.

Java heap: 4 GB allotted (this is pretty much fine)

Ubuntu OS + ES off-heap = 4 GB (this is where I have the problem)

How do I address this problem?

How can I tell how much off-heap memory ES is consuming?

What is the best ES configuration for better off-heap usage?

Your input helps a lot. Thanks again.

I have the following configuration in elasticsearch.yml:
bootstrap.mlockall: true
action.disable_delete_all_indices: true
gateway.expected_nodes: 1
index.routing.allocation.disable_allocation: false
indices.fielddata.cache.size: 40%
network.publish_host: non_loopback:ipv4
node.data: true
node.master: true


Well, all of that looks quite OK:

The "Cached" entry is the OS page cache, which indicates that just around 800 MB are in use there. You cannot really attribute it to a single process, but I think it is still reasonable to say that most of it is used by ES. By the way, the page cache is entirely managed by the OS: it decides when entries need to be evicted, so there is no setting in Elasticsearch for you to tune, and it is not easily or directly changeable at the OS level either.
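To read such figures programmatically, a minimal sketch that parses /proc/meminfo-style text could look like this (here fed an excerpt of the output quoted above; on a live box you would read the real /proc/meminfo):

```python
def meminfo_kb(text, field):
    """Extract a field such as 'Cached' from /proc/meminfo-style text, in kB."""
    for line in text.splitlines():
        if line.startswith(field + ":"):
            return int(line.split()[1])
    raise KeyError(field)

# Excerpt of the /proc/meminfo output from the original post.
sample = """MemTotal:        8175564 kB
Cached:           794728 kB
Mlocked:         5197176 kB
Slab:            1186336 kB
"""
print(meminfo_kb(sample, "Cached") // 1024, "MB of page cache")  # → 776 MB
```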

"Mlocked" indicates the amount of (heap) memory that ES has locked (i.e. prevented the OS from swapping out), which your bootstrap.mlockall: true setting enables. It's more than 4 GB; the JVM may be allocating some space in addition to what is set by -Xmx. It's also possible that other processes mlock memory (meminfo only provides a system-global view).
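To attribute locked memory to a specific process, you can check the standard VmLck line in that process's /proc/&lt;pid&gt;/status. A sketch, using a hypothetical status excerpt for the Elasticsearch JVM (on a live box you would read the real file for the ES PID):

```python
def vm_locked_kb(status_text):
    """Parse VmLck (locked memory, in kB) from /proc/<pid>/status content."""
    for line in status_text.splitlines():
        if line.startswith("VmLck:"):
            return int(line.split()[1])
    return 0  # field absent: nothing locked

# Hypothetical excerpt of /proc/<es_pid>/status, matching the Mlocked
# figure from the post's /proc/meminfo.
sample_status = "Name:\tjava\nVmLck:\t 5197176 kB\nVmRSS:\t 5448928 kB\n"
print(vm_locked_kb(sample_status), "kB locked by this process")
```

If that per-process value accounts for (nearly) all of the system-wide Mlocked figure, the lock belongs to ES.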

"Slab" is kernel-reserved memory and is a little over 1 GB here.

Let us do a quick back-of-the-envelope calculation: the three parts mentioned above together account for roughly 800 MB + 5 GB + 1 GB ≈ 6.8 GB. With 350 MB free, that leaves well under 1 GB for everything else on the system, which is not much. You can try reducing the heap size or allocating more memory to the machine. Off-heap memory usage is definitely not your problem here.
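The same calculation with the exact figures from the /proc/meminfo output above (all values in kB):

```python
cached    = 794728    # page cache
mlocked   = 5197176   # mostly the mlocked ES heap
slab      = 1186336   # kernel slab
mem_total = 8175564
mem_free  = 351492

accounted = cached + mlocked + slab
rest = mem_total - mem_free - accounted

print(round(accounted / 1024**2, 1), "GB accounted for")  # ≈ 6.8 GB
# ~630 MB with exact figures; the 800-900 MB estimate in the text
# comes from rounding MemTotal up to a full 8 GB.
print(round(rest / 1024), "MB left for everything else")
```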

I hope that helps.



Thanks a lot for the detailed explanation. Reducing the heap size worked.

I have the following setup:

ES client node (front-facing; all read/write requests go to the client node)

ES data nodes (3) (no outside interaction; only the client node talks to the data nodes)

I guess with this kind of setup the ES client node might need more RAM than the data nodes.
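For reference, a dedicated client (coordinating-only) node in ES 2.x is configured by disabling both roles in its elasticsearch.yml, the opposite of the data-node settings shown earlier in the thread:

```yaml
# Dedicated client node: holds no data and is never elected master,
# but still routes requests and aggregates search results.
node.master: false
node.data: false
```

Such a node mainly needs heap for aggregating responses, so its RAM needs depend on query fan-out rather than index size.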

Thanks again.