Hello,
I have the following issue: after starting Elasticsearch I see these warnings:
"message": "Unable to lock JVM Memory: error=12, reason=Cannot allocate memory"
"message": "This can result in part of the JVM being swapped out."
"message": "memory locking requested for elasticsearch process but memory is not locked"
I'm running Elasticsearch 7.11.1 in Docker.
After Logstash has been working for some time, I see errors in Logstash saying that a bulk request failed and that Elasticsearch is probably down from Logstash's point of view.
I think this is related to the JVM memory lock error.
First of all, I'm using the configuration below from jvm.options:
## JVM configuration
################################################################
## IMPORTANT: JVM heap size
################################################################
##
## You should always set the min and max JVM heap
## size to the same value. For example, to set
## the heap to 4 GB, set:
##
## -Xms4g
## -Xmx4g
##
## See https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html
## for more information
##
################################################################
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
-Xms20g
-Xmx20g
################################################################
## Expert settings
################################################################
##
## All settings below this section are considered
## expert settings. Don't tamper with them unless
## you understand what you are doing
##
################################################################
## GC configuration
#-XX:+UseConcMarkSweepGC
#-XX:CMSInitiatingOccupancyFraction=75
#-XX:+UseCMSInitiatingOccupancyOnly
## G1GC Configuration
# NOTE: G1 GC is only supported on JDK version 10 or later
# to use G1GC, uncomment the next two lines and update the version on the
# following three lines to your version of the JDK
# 10-13:-XX:-UseConcMarkSweepGC
# 10-13:-XX:-UseCMSInitiatingOccupancyOnly
#14-:-XX:+UseG1GC
#14-:-XX:G1ReservePercent=25
#14-:-XX:InitiatingHeapOccupancyPercent=30
-XX:+UseG1GC
-XX:G1ReservePercent=25
-XX:InitiatingHeapOccupancyPercent=30
## JVM temporary directory
-Djava.io.tmpdir=${ES_TMPDIR}
## heap dumps
# generate a heap dump when an allocation from the Java heap fails
# heap dumps are created in the working directory of the JVM
-XX:+HeapDumpOnOutOfMemoryError
# specify an alternative path for heap dumps; ensure the directory exists and
# has sufficient space
-XX:HeapDumpPath=data
# specify an alternative path for JVM fatal error logs
-XX:ErrorFile=logs/hs_err_pid%p.log
## JDK 8 GC logging
8:-XX:+PrintGCDetails
8:-XX:+PrintGCDateStamps
8:-XX:+PrintTenuringDistribution
8:-XX:+PrintGCApplicationStoppedTime
8:-Xloggc:logs/gc.log
8:-XX:+UseGCLogFileRotation
8:-XX:NumberOfGCLogFiles=32
8:-XX:GCLogFileSize=64m
# JDK 9+ GC logging
9-:-Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m
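Since I'm on Docker, my understanding from the docs is that the heap can also be set via the ES_JAVA_OPTS environment variable instead of editing jvm.options inside the container; a docker-compose sketch (the service name "elasticsearch" is my assumption):

```yaml
# docker-compose fragment (assumed service name "elasticsearch");
# heap is set via ES_JAVA_OPTS instead of editing jvm.options
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.11.1
    environment:
      - "ES_JAVA_OPTS=-Xms20g -Xmx20g"
```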
I also spotted errors about GC overhead:
JvmGcMonitorService", "cluster.name": "docker-cluster", "node.name": "c509678ddb86", "message": "[gc][17967] overhead, spent [472ms] collecting in the last [1s]"
To be honest, I don't know why Elasticsearch cannot lock JVM memory at start, and I'm not sure if my jvm.options is correct.
I have ulimit set to unlimited.
vm.swappiness=1 (I think this is correct)
In elasticsearch.yml I have set: bootstrap.memory_lock: true
And I still have the issue that memory cannot be locked.
Do you know what more I can do?
I also see another message: "Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536"
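From the Elasticsearch Docker docs it looks like the memlock limit that matters is the one the container itself sees, not the host shell's ulimit; with docker run this would be --ulimit memlock=-1:-1, or in docker-compose something like this sketch (again assuming a service named "elasticsearch"):

```yaml
# docker-compose fragment (assumed service name "elasticsearch"):
# raise the container's memlock ulimit so bootstrap.memory_lock: true can succeed
services:
  elasticsearch:
    environment:
      - bootstrap.memory_lock=true
    ulimits:
      memlock:
        soft: -1   # -1 means unlimited
        hard: -1
```

Is this the right direction, or is something else needed?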