Elasticsearch getting killed by the OOM killer because of an out-of-memory condition

Good morning,

We are facing an out-of-memory problem with Elasticsearch.

Our configuration:
We are running on an EC2 instance (t4g.micro) with 1 GB of RAM and 1 vCPU. I know this is probably not sufficient, but I would like some advice.

We are connecting Elasticsearch with Kibana, Logstash and APM to implement application monitoring.

Elasticsearch configuration (version 7.17):

bootstrap.memory_lock: false
http.port: 9200
network.host: 0.0.0.0
transport.host: localhost
transport.tcp.port: 9300
xpack.security.authc.api_key.enabled: true

#################################### Paths ####################################
# Path to directory containing configuration (this file and logging.yml):
path.data: /data/elasticsearch/data
path.logs: /data/elasticsearch/logs
action.auto_create_index: true
xpack.security.enabled: true

JVM configuration:

## GC configuration
8-13:-XX:+UseConcMarkSweepGC
8-13:-XX:CMSInitiatingOccupancyFraction=75
8-13:-XX:+UseCMSInitiatingOccupancyOnly

## G1GC Configuration
# NOTE: G1 GC is only supported on JDK version 10 or later
# to use G1GC, uncomment the next two lines and update the version on the
# following three lines to your version of the JDK
# 10-13:-XX:-UseConcMarkSweepGC
# 10-13:-XX:-UseCMSInitiatingOccupancyOnly
14-:-XX:+UseG1GC


## JVM temporary directory
-Djava.io.tmpdir=${ES_TMPDIR}

## heap dumps

# generate a heap dump when an allocation from the Java heap fails
# heap dumps are created in the working directory of the JVM
-XX:+HeapDumpOnOutOfMemoryError

# specify an alternative path for heap dumps; ensure the directory exists and
# has sufficient space
-XX:HeapDumpPath=/var/lib/elasticsearch

# specify an alternative path for JVM fatal error logs
-XX:ErrorFile=/data/elasticsearch/logs/hs_err_pid%p.log

## JDK 8 GC logging

8:-XX:+PrintGCDetails
8:-XX:+PrintGCDateStamps
8:-XX:+PrintTenuringDistribution
8:-XX:+PrintGCApplicationStoppedTime
8:-Xloggc:/data/elasticsearch/logs/gc.log
8:-XX:+UseGCLogFileRotation
8:-XX:NumberOfGCLogFiles=32
8:-XX:GCLogFileSize=64m

# JDK 9+ GC logging
9-:-Xlog:gc*,gc+age=trace,safepoint:file=/data/elasticsearch/logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m

Swapping is disabled using: sudo swapoff -a.
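In case it is relevant: as far as I understand, swapoff -a only applies until the next reboot, so to keep swap off permanently the swap entry in /etc/fstab also has to be disabled. Roughly (the sed one-liner is just one way to do it; editing the file by hand works too):

sudo swapoff -a                                  # turn swap off for the current boot
sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab   # comment out swap entries so it stays off after reboot
swapon --show                                    # prints nothing when swap is fully disabled
free -m                                          # the Swap line should show 0 total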

I need some advice about this configuration: what should I improve or add?

Welcome to our community! :smiley:

Yep, that's pretty much the TL;DR of it all: that's not really enough to run Elasticsearch effectively, and there are no magic settings or flags we can give you to get around it. You need to at least double the size of your node; we generally suggest running Elasticsearch with 2GB of heap at a minimum.
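To make that concrete (assuming the standard DEB/RPM layout, so the path may differ on your install): once the node has, say, 4GB of RAM, you can pin the heap explicitly instead of relying on the automatic sizing by dropping a file such as /etc/elasticsearch/jvm.options.d/heap.options containing just:

-Xms2g
-Xmx2g

Keep -Xms and -Xmx identical, and leave roughly half of the machine's RAM to the operating system for filesystem caching.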

Yeah, I think that is the best thing to do.

But I have another option and I want your opinion on it: could I, for example, enable swapping to a file on disk and not lock the memory?

I know this is not efficient, but I would like to know whether it is even possible, just out of curiosity.
Thanks for your valuable answers.

We are connecting Elasticsearch with Kibana, Logstash and APM to implement application monitoring.

This sentence is a bit unclear. If you're trying to run all of these different applications on a single t4g.micro, this will probably never work (you'd probably need at least 4GB of RAM). If you're only running Elasticsearch on the EC2 instance, I'd still recommend against using swap at that size; the instance would probably spend more time swapping than doing real work.
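Also, before changing anything, it's worth confirming it really is the kernel OOM killer (and not a Java-level OutOfMemoryError inside Elasticsearch), because the remedies are different. Something along these lines on the instance is usually enough:

sudo dmesg -T | grep -i 'killed process'          # kernel OOM killer entries
sudo journalctl -u elasticsearch --since today    # service restarts around the same time

If dmesg shows the java process being killed, no amount of heap tuning on a 1GB box will save you; the machine simply needs more memory.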

I think you might have 2 options:

  1. See if you can upgrade to a newer version in the 8.x series. There have been some resource-efficiency improvements there that might allow you to run Elasticsearch in <1GB of RAM.
    • I currently run some test clusters with 1250MB of RAM (~625MB heap), but I'd assume they could run with even less.
  2. If you're unable to upgrade Elasticsearch, I'd recommend moving the EC2 instance to a t4g.small and seeing if that works. It costs a bit more, but it will be an overall better experience than dealing with swap (the quick check sketched right after this list will tell you whether the node is coping).
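Either way, a quick way to keep an eye on whether the node is coping is the _cat/nodes API (with xpack.security enabled you will need credentials; elastic:changeme below is just a placeholder):

curl -s -u elastic:changeme 'http://localhost:9200/_cat/nodes?v&h=name,heap.percent,heap.max,ram.percent'

If heap.percent sits permanently in the high 80s or 90s, the node is under-provisioned no matter what else you tune.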

We explicitly tell you to disable swap, so no.
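If the underlying worry is the heap getting swapped out on a host where swap has to stay enabled for other reasons, the documented alternative is memory locking rather than swap-to-file: bootstrap.memory_lock: true in elasticsearch.yml, plus allowing the service to lock memory (for a systemd install that is typically a LimitMEMLOCK=infinity override). A minimal sketch, assuming a systemd-managed install:

# elasticsearch.yml
bootstrap.memory_lock: true

# systemd drop-in (created via: sudo systemctl edit elasticsearch)
[Service]
LimitMEMLOCK=infinity

But on a 1GB node that does not help either; the real problem is simply too little RAM.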

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.