Elasticsearch node consuming almost 100% memory

Hi team,

I am using 3 Elasticsearch nodes with Graylog, each running in a separate host container. Today I was checking a Grafana dashboard that monitors Elasticsearch performance, and I found that Elasticsearch is consuming almost 99% of memory. I am unable to find the reason.

Retention time: 30 days
Logs per day: 60-70 million (~60 GB/day)
Cores: 8
RAM: 32 GB
JVM heap: 16 GB allocated
Hard disk: 2 TB

root@c8a9XXXXX:/opt/elasticsearch/logs# curl -s localhost:9200/_prometheus/metrics | grep -i es_os_mem_used_bytes
# HELP es_os_mem_used_bytes Memory used
# TYPE es_os_mem_used_bytes gauge
es_os_mem_used_bytes{cluster="graylog2",node="node01",nodeid="0d2puydfTROOW7XJqL75Yw",} 3.2914161664E10     

root@c8axxxxx:/opt/elasticsearch/logs# free -hm
         total       used       free     shared    buffers     cached
Mem:           31G        30G       611M       852K         0B       9.0G
-/+ buffers/cache:        21G       9.6G
Swap:         1.0G       927M        96M

Some other settings already applied to Elasticsearch:

    action.auto_create_index: .watches,.triggered_watches,.watcher-history-*
    Field type refresh interval: 30 seconds
    Disable index optimization after rotation: true
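
For reference, one way to confirm that the action.auto_create_index restriction actually took effect is to query the cluster settings on one of the nodes. This is just a sketch using the same node and port as the output above:

    root@c8a9XXXXX:~# curl -s 'localhost:9200/_cluster/settings?include_defaults=true&flat_settings=true' | grep -i auto_create_index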

Any advice will be appreciated

Elasticsearch relies on the operating system page cache for performance, and that cache uses memory in addition to the heap. If other processes need that memory, the operating system will release it, so this is normal and not a problem.
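
One way to verify this on the node above is to compare the JVM heap usage with the OS view: if heap_used_percent is well below 100 while free reports memory as nearly full, the remainder is mostly page cache. A minimal sketch using the standard node stats API, with the same host and port as the earlier output:

    # JVM heap usage per node; a heap_used_percent well under 100 while
    # "free" shows ~99% used points to page cache rather than the heap
    curl -s 'localhost:9200/_nodes/stats/jvm?pretty' | grep -E 'heap_used_percent|heap_used_in_bytes|heap_max_in_bytes'

    # OS-level memory as Elasticsearch reports it (cache counts as "used")
    curl -s 'localhost:9200/_nodes/stats/os?pretty' | grep -E 'total_in_bytes|used_in_bytes|free_in_bytes|used_percent'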

OK, that is quite informative. Actually, I was worried because I recently increased the retention time and I can see Elasticsearch becoming a bottleneck for Graylog. The output buffer (which sends the data to Elasticsearch after all the processing) starts getting full. Previously some messages were lost because the output and process buffers filled up completely; each can hold up to 65 k messages. I know that number is high, but at around 800 logs per second it still takes over a minute (65,000 / 800 ≈ 80 seconds) to fill a buffer. After all the optimizations already shared in the comment above, I am not losing any logs, but I am concerned that some buffers still fill up from time to time.
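
If the output buffer keeps filling up, the settings usually tuned are the Graylog processor and output options in server.conf. The snippet below is only a sketch; the values are illustrative assumptions, not recommendations, and need to be sized against your actual throughput and the capacity of the Elasticsearch nodes (ring_size is the setting behind the 65 k per-buffer figure mentioned above):

    # /etc/graylog/server/server.conf -- illustrative values only
    # Size of the internal process/output ring buffers, in messages (65536 is the default)
    ring_size = 65536
    # Threads draining the process and output buffers
    processbuffer_processors = 5
    outputbuffer_processors = 3
    # Messages per bulk request sent to Elasticsearch, and maximum seconds between flushes
    output_batch_size = 1000
    output_flush_interval = 1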
