I use the mmapfs store type, but my ES cluster doesn't use the available memory

I have an ES cluster with 70 GB of data, using mmapfs. There is a lot of available memory, but the cluster doesn't use it, so search performance is not good.

5 servers, each configured with:
-Xms32g -Xmx32g -XX:PermSize=1000m -XX:MaxPermSize=1000m -XX:NewSize=3000m -XX:MaxNewSize=3000m -XX:MaxDirectMemorySize=2000m
vm.max_map_count: 262144
max_file_descriptors: 512000
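(These values can be checked with, for example, the following, assuming the node listens on localhost:9200:

sysctl vm.max_map_count
ulimit -n
curl -s 'localhost:9200/_nodes/process?pretty'
)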

Cluster info:
Nodes: 5 Indices: 9 Shards: 80 Data: 71.56 GB CPU: 10% Memory: 7.73 GB / 158.54 GB

How can I force ES to use the available memory?

mmapfs isn't memory-based storage for your indices.
See https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules-store.html
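For reference, the store type is set per index, or node-wide with index.store.type in elasticsearch.yml. A minimal sketch, with my_index as a placeholder index name:

curl -XPUT 'localhost:9200/my_index' -d '{ "settings": { "index.store.type": "mmapfs" } }'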

I know, but can I force as much index data as possible into memory using mmapfs?
I have another ES cluster using mmapfs, and I think it takes advantage of the available memory, so its search times are very low.

The other ES cluster's info:
Nodes: 40 Indices: 14 Shards: 868 Data: 265.25 GB CPU: 6300% Memory: 1.32 TB / 2.72 TB

It doesn't index to memory; that is what I mean. It might just be terminology, but it's important here.

Heap and memory are used in different ways, so this isn't something that can be easily answered.
What sort of data is it? What sort of queries? How often do they run? Are you using warmers?
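For reference, a warmer on the 1.x API is registered against an index roughly like this; my_index, warmer_1 and the match_all query are just placeholders, and the body is an ordinary search request that is run to warm up new segments before they are made available for search:

curl -XPUT 'localhost:9200/my_index/_warmer/warmer_1' -d '{ "query": { "match_all": {} } }'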

It doesn't index to memory, but I think an index using mmapfs also ends up in memory:

The MMap FS type stores the shard index on the file system (maps to Lucene MMapDirectory) by mapping a file into memory (mmap). Memory mapping uses up a portion of the virtual memory address space in your process equal to the size of the file being mapped. Before using this class, be sure you have allowed plenty of virtual address space.

There are many warmers, but they feel useless; the search time is not as low as on my other ES cluster.

Yes, mmapfs is using virtual memory. It is not using physical memory. That is an important difference.
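You can see the difference on a data node: VmSize (the virtual address space, which includes the mmapped index files) will be far larger than VmRSS (the pages actually resident in RAM). With <es_pid> standing in for the Elasticsearch process id:

grep -E 'VmSize|VmRSS' /proc/<es_pid>/status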

Why do you think mmapfs is to blame?

Look at the slowest parts of your I/O and at bottlenecks first, and at what workload pattern you exercise on the machines; different patterns create different requirements.
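For example, iostat from the sysstat package will show per-device utilization and wait times while your queries run:

iostat -x 1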

Note that -XX:MaxDirectMemorySize=2000m is a very tight restriction. Why did you choose this setting?

Also, you should not change the GC generation sizing (-XX:NewSize/-XX:MaxNewSize); that can make your JVM much slower.

I don't blame mmapfs; I am sure the mmapfs type performs well, given my other ES cluster. I am just curious why the ES process doesn't take the available memory, even though I set the -Xms JVM arg.

Now I have solved the problem:

ulimit -m unlimited

Then everything is OK.
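Note that ulimit only affects processes started from that shell, so it needs to be set in whatever shell or init script launches Elasticsearch. The current value can be confirmed with:

ulimit -m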

This is just a temporary solution. For a permanent change, please add the following to /etc/security/limits.conf:

es soft memlock unlimited
es hard memlock unlimited

where es is the assumed Elasticsearch user. Do not forget to log the user out and back in for the settings to take effect.
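To verify, assuming es really is the user running Elasticsearch, open a fresh login shell and check the locked-memory and resident-set limits:

su - es -c 'ulimit -l -m'

These memlock limits are also what allow bootstrap.mlockall: true in elasticsearch.yml to lock the heap into RAM, if you use that setting.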