Issues with JVM memory lock

Hello,
I have the following issue:
after starting Elasticsearch I see these warnings:

"message": "Unable to lock JVM Memory: error=12, reason=Cannot allocate memory"
"message": "This can result in part of the JVM being swapped out."
"message": "memory locking requested for elasticsearch process but memory is not locked"

I'm running Elasticsearch in Docker.
The ES version is 7.11.1.

After some time of working with Logstash, I see errors in Logstash that the bulk request failed, so ES is probably down from Logstash's point of view. I think this is related to the JVM memory lock error.

Please check this.

The problem in my case is that I don't have bootstrap.memory_lock: true enabled.
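
For reference, a minimal sketch of what enabling it looks like in elasticsearch.yml (note that in Docker the container must also be allowed to lock memory, which comes up later in this thread):

bootstrap.memory_lock: true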

What is the heap size of your VMs, and how much RAM do they have?

"Cannot allocate memory" clearly means ES is not able to get enough memory.

Do you see any high GC activity? Can you share the complete log file?

Also, try disabling swap to see if that helps.
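
For example, on the Docker host (a sketch; this disables swap entirely until the next reboot, and you would also comment out any swap entries in /etc/fstab to make it permanent):

sudo swapoff -a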

Hello,
My issue is as follows:
First of all, I think that I have enough memory - please see the attachment:


I'm using the following configuration in jvm.options:

## JVM configuration

################################################################
## IMPORTANT: JVM heap size
################################################################
##
## You should always set the min and max JVM heap
## size to the same value. For example, to set
## the heap to 4 GB, set:
##
## -Xms4g
## -Xmx4g
##
## See https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html
## for more information
##
################################################################

# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space

-Xms20g
-Xmx20g
################################################################
## Expert settings
################################################################
##
## All settings below this section are considered
## expert settings. Don't tamper with them unless
## you understand what you are doing
##
################################################################

## GC configuration
#-XX:+UseConcMarkSweepGC
#-XX:CMSInitiatingOccupancyFraction=75
#-XX:+UseCMSInitiatingOccupancyOnly

## G1GC Configuration
# NOTE: G1 GC is only supported on JDK version 10 or later
# to use G1GC, uncomment the next two lines and update the version on the
# following three lines to your version of the JDK
# 10-13:-XX:-UseConcMarkSweepGC
# 10-13:-XX:-UseCMSInitiatingOccupancyOnly

#14-:-XX:+UseG1GC
#14-:-XX:G1ReservePercent=25
#14-:-XX:InitiatingHeapOccupancyPercent=30

-XX:+UseG1GC
-XX:G1ReservePercent=25
-XX:InitiatingHeapOccupancyPercent=30


## JVM temporary directory
-Djava.io.tmpdir=${ES_TMPDIR}

## heap dumps

# generate a heap dump when an allocation from the Java heap fails
# heap dumps are created in the working directory of the JVM
-XX:+HeapDumpOnOutOfMemoryError

# specify an alternative path for heap dumps; ensure the directory exists and
# has sufficient space
-XX:HeapDumpPath=data

# specify an alternative path for JVM fatal error logs
-XX:ErrorFile=logs/hs_err_pid%p.log

## JDK 8 GC logging
8:-XX:+PrintGCDetails
8:-XX:+PrintGCDateStamps
8:-XX:+PrintTenuringDistribution
8:-XX:+PrintGCApplicationStoppedTime
8:-Xloggc:logs/gc.log
8:-XX:+UseGCLogFileRotation
8:-XX:NumberOfGCLogFiles=32
8:-XX:GCLogFileSize=64m

# JDK 9+ GC logging
9-:-Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m

I also spotted errors about GC overhead:

JvmGcMonitorService", "cluster.name": "docker-cluster", "node.name": "c509678ddb86", "message": "[gc][17967] overhead, spent [472ms] collecting in the last [1s]"

To be honest, I don't know why Elasticsearch cannot lock JVM memory at startup, and I'm not sure if my jvm.options is correct.

Is that output in MB (i.e. the output of the free -m command)?

Can you run this and share the output:

GET _cat/nodes?v=true&h=name,node*,heap*
or
curl -XGET "http://<ip>:9200/_cat/nodes?v=true&h=name,node*,heap*"

Below is a screenshot of the heap memory; it looks OK.
But I found another issue:
I saw that vm.swappiness=30 is defined, and I think this can be a problem; as far as I know it should be 1 or 0.
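
If it helps, a sketch of changing it on the host (the sysctl -w change applies immediately; the sysctl.conf entry persists it across reboots):

# apply immediately
sudo sysctl -w vm.swappiness=1
# persist across reboots
echo 'vm.swappiness=1' | sudo tee -a /etc/sysctl.conf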

Yes, that's one of the options for disabling swap, as mentioned earlier.

I have ulimit set to unlimited,
vm.swappiness=1 (I think this is correct),
and the parameter bootstrap.memory_lock: true set in elasticsearch.yml.
And I still have the issue that memory cannot be locked.
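
For what it's worth, one way to verify whether the lock actually took effect is the nodes info API (replace <ip> with your host, as in the earlier command):

curl -XGET "http://<ip>:9200/_nodes?filter_path=**.mlockall&pretty"

If mlockall is false, memory locking did not succeed.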

Do you know what else I can do?

I also see another issue:
Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536"

I don't know how I can increase this RLIMIT.

You can refer to:
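
For Docker specifically, a sketch of raising the memlock limit when starting the container (the container name es01 and the single-node discovery setting are just examples for a standalone test; the image tag matches the version mentioned above):

# allow the container to lock an unlimited amount of memory
docker run -d --name es01 \
  -e "bootstrap.memory_lock=true" \
  -e "discovery.type=single-node" \
  --ulimit memlock=-1:-1 \
  -p 9200:9200 \
  docker.elastic.co/elasticsearch/elasticsearch:7.11.1

In docker-compose the equivalent is a ulimits section with memlock soft and hard both set to -1.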
