High memory buffer/cache usage in Kubernetes

Describe the bug
I tried to start an Elasticsearch cluster in Kubernetes with the nodes below, setting the pods' memory limit to 3G:
2 client nodes
3 master nodes
3 data nodes
The problem I found is that the Elasticsearch pods don't seem to respect this limit and keep using buffer/cache memory on the node, which I can see increasing constantly and going beyond 7G. This leads to the nodes running out of memory; then, when a pod dies, it can't start again because there is no memory left, so I have to run `echo 2` to drop caches and free the memory.

Is there a way to control how much buffer/cache memory Elasticsearch uses?
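For reference, this is roughly how the 3G limit is being set on the pods. The key names (`esJavaOpts`, `resources`) are a sketch assuming the elastic/elasticsearch Helm chart's `values.yaml` layout; adjust to whatever chart is actually in use:

```yaml
# Sketch of values.yaml settings for the data nodes (key names assume
# the elastic/elasticsearch Helm chart; adjust to your chart's layout).
esJavaOpts: "-Xms1500m -Xmx1500m"   # keep the JVM heap well below the limit

resources:
  limits:
    memory: "3Gi"    # hard cap enforced by the container's cgroup
  requests:
    memory: "3Gi"
```

Note that on Linux the page cache generated by a container is accounted against its cgroup memory limit but is reclaimable, so under memory pressure the kernel normally frees cache pages rather than OOM-killing the pod.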
Version of Helm and Kubernetes:
Kubernetes v1.13.5
