When I was bulk-importing millions of documents, I ran into a problem.
Memory usage on the cluster servers climbed rapidly, from about 50% to about 90%, and after the import finished the used memory came back down very slowly.
I analyzed the memory usage and found that the filesystem (page) cache was taking up a lot of memory.
For example, this is the output of the top command in a typical situation (though it is not the 90% case):
Mem:  198233800k total, 87175692k used, 111058108k free,   408804k buffers
Swap:         0k total,        0k used,        0k free, 10801060k cached
You can see that the page cache takes up about 10 GB of memory, which is a bit high.
Since it is filesystem cache, I found that I can run "echo 1 > /proc/sys/vm/drop_caches" to drop the cache and free the memory.
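To illustrate, here is a minimal sketch of how I check the cache figures before dropping them (the drop line itself needs root, so it is commented out here; sync alone is harmless):

```shell
# Show the current page-cache figures (the "cached"/"buffers" columns in top)
grep -E '^(Cached|Buffers)' /proc/meminfo

# Flush dirty pages to disk first, then drop the clean page cache.
# echo 1 drops the page cache only; 2 drops dentries/inodes; 3 drops both.
sync
# echo 1 > /proc/sys/vm/drop_caches   # requires root, so commented out here
```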
I have four questions:
1. When memory use is relatively high (say 80% or 90%) because of filesystem cache usage, does the server need to be tuned? If so, what do you recommend? Is the "echo 1 > /proc/sys/vm/drop_caches" method safe for the Elasticsearch process and for the OS?
2. Can I run it regularly after each bulk import?
3. When I use the /_nodes/stats API to view the cluster state, the os/swap information is 0. Is that right? I would expect the value to be, for example, the "10801060k" shown above.
4. The Elasticsearch process already locks its memory via "bootstrap.mlockall: true", so why is there still swap memory?
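For question 3, here is a sketch of what I mean, using a made-up excerpt of a /_nodes/stats response (the node name and numbers are invented; the field names are what I see in the 5.x "os" section):

```python
import json

# Hypothetical excerpt of a GET /_nodes/stats/os response; in reality this
# would come from e.g. curl 'localhost:9200/_nodes/stats/os'.
sample = json.loads("""
{
  "nodes": {
    "node1": {
      "os": {
        "mem":  {"total_in_bytes": 202991411200, "free_in_bytes": 113723502592},
        "swap": {"total_in_bytes": 0, "free_in_bytes": 0, "used_in_bytes": 0}
      }
    }
  }
}
""")

for node_id, node in sample["nodes"].items():
    swap = node["os"]["swap"]
    # os.swap mirrors the "Swap: ... total/used/free" row of top,
    # which is 0 on my servers -- it does not report the "cached" column.
    print(node_id, "swap total:", swap["total_in_bytes"])
```

So the 0 I see matches the "Swap: 0k total" row, but I expected the cached figure to show up somewhere in these stats.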
I am using Elasticsearch 5.4.0.