Is it safe to clear the filesystem cache?

Hello everyone,

While bulk-importing millions of documents, I ran into a problem.
The memory usage of the cluster's servers climbed rapidly from 50% to about 90%, and after the import finished the occupied memory came down only very slowly.

I analyzed the memory usage and found that the filesystem cache took up a lot of memory.
For example, this is the output of the top command in a typical situation (though not the 90% situation):

    Mem:  198233800k total, 87175692k used, 111058108k free,   408804k buffers
    Swap:        0k total,        0k used,        0k free, 10801060k cached

You can see that the cache takes up about 10 GB of memory, which seems a little high.

Since it is filesystem cache, I found that I can use "echo 1 > /proc/sys/vm/drop_caches" to empty the cache and free the memory.
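For reference, a minimal sketch of that procedure (assuming a Linux host; the drop itself needs root, so it is shown as a comment):

```shell
#!/bin/sh
# Minimal sketch (assumes Linux): check how much memory the page
# cache is using before deciding whether to drop it.
awk '/^Cached:/ {print "page cache:", $2, "kB"}' /proc/meminfo

# The drop itself needs root, and dirty pages should be flushed
# first so they can actually be reclaimed.  "echo 1" frees only the
# clean page cache; 2 frees dentries/inodes; 3 frees both:
#   sync
#   echo 1 > /proc/sys/vm/drop_caches
```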

I have four questions:

  1. When memory usage is relatively high (for example 80% or 90%) because of filesystem cache usage, does the server need to be tuned?

  2. If tuning is needed, what do you recommend? Is it safe to use the "echo 1 > /proc/sys/vm/drop_caches" method for an Elasticsearch app, or for the OS in general?
    Can I run it regularly after importing data in bulk?

  3. When I use the /_nodes/stats API to view the cluster state, I found the os/swap values are all 0. Is this right? For example:
    "mem": {
      "total_in_bytes": 135207735296,
      "free_in_bytes": 38961053696,
      "used_in_bytes": 96246681600,
      "free_percent": 29,
      "used_percent": 71
    },
    "swap": {
      "total_in_bytes": 0,
      "free_in_bytes": 0,
      "used_in_bytes": 0
    }

  4. I think the swap value should be, for example, the "10801060k" shown above. I have already locked the memory with "bootstrap.mlockall: true" in the config, so why is there still swap memory?
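One thing worth checking directly (an assumption based on how top lays out its summary area): the "cached" figure printed on top's Swap line is actually the page cache, not swapped-out memory, so it would be consistent for the API to report swap as 0. A quick way to compare the two on a Linux host:

```shell
# In top's header the "cached" figure on the Swap line is the page
# cache (file data cached in RAM), not swapped-out memory.  Compare
# it with the real swap total from /proc/meminfo:
awk '/^SwapTotal:/ {print "swap total:", $2, "kB"}
     /^Cached:/    {print "page cache:", $2, "kB"}' /proc/meminfo
```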

I am using version 5.4.0.

Hello everybody, any clues?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.