Elasticsearch uses too much memory, causing high system CPU

I use Elasticsearch to store some datetime-based data. Each day's data is stored as a separate index, and the total storage of the 75 indices is about 600GB.
When Elasticsearch starts up, the process's VIRT climbs to 145GB. Then, when I run lots of aggregations, the process's RES and SHR keep growing quickly. Once RES reaches about 40GB, system CPU (sy) climbs to 90% and stays there.
At that point the server becomes very slow and takes a long time to respond to any command. All I can do is kill the process and restart it.

What can I do to solve this problem? Has anybody run into the same thing?

Details below:
One server, 64GB RAM
Configured as a single node, heap size max 16GB, direct memory max 16GB
Store type: default_fs
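
Concretely, the node is started roughly like this (a sketch using the 2.x startup conventions; the exact startup script and flags on my server may differ slightly):

```
# heap capped at 16GB (the 2.x startup script reads the ES_HEAP_SIZE env var)
export ES_HEAP_SIZE=16g
# direct memory capped at 16GB via a plain JVM flag
export ES_JAVA_OPTS="-XX:MaxDirectMemorySize=16g"
./bin/elasticsearch

# elasticsearch.yml keeps the default hybrid store type:
# index.store.type: default_fs
```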

The only thing you should worry about is heap use. Are you monitoring this?

You should also make sure that you do not have swapping enabled, as this can cause performance problems.
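
For example (a sketch; the exact name of the memory-lock setting depends on your Elasticsearch version):

```
# disable swap on the box entirely (or at least lower swappiness)
sudo swapoff -a
sudo sysctl -w vm.swappiness=1

# and/or lock the JVM heap in RAM via elasticsearch.yml:
# bootstrap.mlockall: true   (called bootstrap.memory_lock in newer versions)
```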

Yes, I set the heap size max to 16GB, and I have monitored the heap with jmap. The usage and GC both look fine.
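
This is roughly how I checked it (a sketch; <pid> is the Elasticsearch process id):

```
# heap layout and usage snapshot
jmap -heap <pid>

# GC utilisation sampled every 5 seconds
jstat -gcutil <pid> 5000

# the same numbers as reported by Elasticsearch itself
curl 'localhost:9200/_nodes/stats/jvm?pretty'
```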

I also suspect it is caused by swapping. I will try disabling it.

That didn't work. The same thing happened again.

Which version of Elasticsearch are you using?

2.3.4

This problem was solved after I switched from mmapfs to niofs.

I have a lot of data, so the total storage of the index files is very big, and my search/aggregation requests touch much of it. In other words, the working set (the hot data for reads) is too big.

So with mmapfs, many pages of the index files get loaded into memory.
Once memory usage passes a threshold and more of the files need to come into memory, but there is not enough memory left, the system starts paging.
This kind of paging does not seem to be controlled by the memory lock. It burns a lot of system CPU, so the system can't respond to anything.

If your working set (hot data) is bigger than the amount of memory you have, I suggest you use niofs instead of mmapfs.
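
For reference, a sketch of how I switched the store type (it is a static index setting, so it has to be set node-wide in elasticsearch.yml or per index at creation time; the index name below is just an example):

```
# node-wide, in elasticsearch.yml (applies to newly created indices):
#   index.store.type: niofs

# or per index, at creation time:
curl -XPUT 'localhost:9200/logs-2016.08.01' -d '{
  "settings": { "index.store.type": "niofs" }
}'
```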

This is not a good recommendation.
You should disable swap as per the best practices.

What if my whole data set is hot data? It is nearly 400GB and I only have 64GB of memory.

You do not need to store all your data in memory. If you do (or think you do), then perhaps you should consider an in-memory data store.

ES will work perfectly fine with more data than memory.

In fact, mmapfs uses mmap to speed up reading of the index files. But after a file is mapped, whether it actually gets loaded into memory is decided by the OS, not by ES. Once you access part of the file for the first time, those pages are loaded into memory.

So as more and more cold data gets read for the first time, memory usage keeps growing and finally causes swapping. I think you could reproduce this situation yourself.
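
One way to watch this happening (a sketch; <pid> stands for the Elasticsearch process id):

```
# VIRT/RES/SHR of the process; SHR keeps growing as mapped index pages
# are faulted into the page cache
top -p <pid>

# per-mapping resident sizes; the big entries are the mmapped Lucene files
pmap -x <pid> | sort -n -k3 | tail
```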

I used the memory lock setting to lock the memory used by the JVM, but the memory used by mmap seems to be a kind of shared memory, and it is not controlled by that setting.
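
For what it's worth, you can confirm whether the heap lock itself took effect (a sketch against the nodes info API; it says nothing about the mmapped files):

```
# look for "mlockall": true in the process section of the response
curl 'localhost:9200/_nodes/process?pretty'
```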

I didn't try the swapoff command, and I don't know what will happen when the growing memory usage runs into a system with swap turned off.

MongoDB also uses mmap, and it has the same problem when the hot data is bigger than memory.