We have an issue with caching in Elasticsearch. Whenever a node in our cluster hits a Java OutOfMemoryError, no .hprof file gets created; instead the whole heap appears to end up in the cache. On a node with 16 GB of RAM, the cache was as high as 12 GB when the node went out of memory. We have to clear the cache manually to get the node up and running again (see the sketch below).
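For reference, the manual clear is sketched below. This is illustrative only, assuming the growth reported by free is the Linux page cache; the drop_caches write must be run as root.

```
# Illustrative only: drop the Linux page cache, assuming that is the
# "cache" that grows. sync first so dirty pages are flushed to disk.
sync
echo 3 > /proc/sys/vm/drop_caches
```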
The contents of "/usr/share/elasticsearch/logs" are empty as well, which confirms that no .hprof file was created. There was enough disk space at that location (around 500 GB) to take a full heap dump. The .hprof file does get created under ES version 1.7.
The ES version in use is 2.3.3. Is this a limitation of that version? If yes, what is the alternative?
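One thing worth ruling out is whether the heap-dump flag is set on the 2.3.3 JVM at all. Below is a sketch of forcing it explicitly, assuming the 2.x startup script honours ES_JAVA_OPTS; the dump path is only an example, and the flags themselves are standard HotSpot options.

```
# Sketch: explicitly request a heap dump on OutOfMemoryError.
# The dump path is an example; any directory with enough free space works.
export ES_JAVA_OPTS="-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/usr/share/elasticsearch/logs"
/usr/share/elasticsearch/bin/elasticsearch -d

# Verify the flags took effect on the running JVM
jcmd $(pgrep -f org.elasticsearch.bootstrap.Elasticsearch) VM.flags | grep -i heapdump
```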
This is the output of free -h; only the Elasticsearch process is running.
```
elastic-1@elastic-1:~$ free -h
             total       used       free     shared    buffers     cached
Mem:           15G        15G       167M       784K        49M        14G
-/+ buffers/cache:        1.2G        14G
Swap:          15G       1.4M        15G
```
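Note the -/+ buffers/cache row: actual process usage is only 1.2G, with roughly 14G sitting in the cache. If any of that growth were Elasticsearch's own caches (fielddata, query cache) rather than the OS page cache, the clear-cache API would be the cleaner way to release it; a sketch, assuming the default HTTP port:

```
# Clear ES-level caches (fielddata, query cache) on all indices.
# Assumes the node listens on the default HTTP port 9200.
curl -XPOST 'http://localhost:9200/_cache/clear'
```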