Elasticsearch overhead all the time

I deployed ELK on a single node and set Elasticsearch's heap to 10 GB; I made no other changes to ES.
But every time I run a query from Kibana, ES reports warnings like these:
[2018-11-14T10:38:42,660][INFO ][o.e.g.GatewayService ] [ug_uKSM] recovered [2] indices into cluster_state
[2018-11-14T10:38:43,489][INFO ][o.e.c.r.a.AllocationService] [ug_uKSM] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[logmonster][2]] ...]).
[2018-11-14T10:39:34,222][INFO ][o.e.m.j.JvmGcMonitorService] [ug_uKSM] [gc][55] overhead, spent [672ms] collecting in the last [1.5s]
[2018-11-14T10:39:35,469][WARN ][o.e.m.j.JvmGcMonitorService] [ug_uKSM] [gc][56] overhead, spent [817ms] collecting in the last [1.2s]
[2018-11-14T10:39:36,631][WARN ][o.e.m.j.JvmGcMonitorService] [ug_uKSM] [gc][57] overhead, spent [808ms] collecting in the last [1.1s]
[2018-11-14T10:39:37,751][WARN ][o.e.m.j.JvmGcMonitorService] [ug_uKSM] [gc][58] overhead, spent [822ms] collecting in the last [1.1s]
[2018-11-14T10:39:38,782][WARN ][o.e.m.j.JvmGcMonitorService] [ug_uKSM] [gc][59] overhead, spent [705ms] collecting in the last [1s]
[2018-11-14T10:39:39,783][WARN ][o.e.m.j.JvmGcMonitorService] [ug_uKSM] [gc][60] overhead, spent [618ms] collecting in the last [1s]
[2018-11-14T10:39:40,784][WARN ][o.e.m.j.JvmGcMonitorService] [ug_uKSM] [gc][61] overhead, spent [716ms] collecting in the last [1s]
[2018-11-14T10:39:41,785][WARN ][o.e.m.j.JvmGcMonitorService] [ug_uKSM] [gc][62] overhead, spent [504ms] collecting in the last [1s]
[2018-11-14T10:39:42,786][INFO ][o.e.m.j.JvmGcMonitorService] [ug_uKSM] [gc][63] overhead, spent [452ms] collecting in the last [1s]

Is there anything I missed? Can anyone offer some comments on this problem?
Thanks in advance.
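
In case it is useful, the heap and GC pressure can be checked while these warnings appear. This is only a generic sketch, assuming the node listens on the default localhost:9200:

# per-node JVM stats: look at heap_used_percent and the gc collector counts/times
curl -s 'http://localhost:9200/_nodes/stats/jvm?pretty'

# compact per-node heap view
curl -s 'http://localhost:9200/_cat/nodes?v&h=name,heap.current,heap.percent,heap.max'

If heap_used_percent stays consistently high, frequent collections like the ones in the log above are expected.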

What version are you on?
How many shards, how many indices, how many GB of data?
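
For reference, all of that can be read straight from the node with the standard APIs; the commands below are a sketch assuming the default localhost:9200 binding:

curl -s 'http://localhost:9200/'                 # version and build info
curl -s 'http://localhost:9200/_cat/indices?v'   # indices, doc counts, store size
curl -s 'http://localhost:9200/_cat/shards?v'    # shards and their on-disk sizes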

The ES version is 6.4.1, with just one index and one node. The node has around 100 GB of memory, but in ES's configuration I allocated 10 GB for the heap:
[root@localhost logs]# cat /home/11thone/elk/elasticsearch/config/jvm.options
## JVM configuration

################################################################
## IMPORTANT: JVM heap size
################################################################
##
## You should always set the min and max JVM heap
## size to the same value. For example, to set
## the heap to 4 GB, set:
##
## -Xms4g
## -Xmx4g
##
## See https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html
## for more information
##
################################################################

# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space

-Xms10g
-Xmx10g

################################################################
## Expert settings
################################################################
##
## All settings below this section are considered
## expert settings. Don't tamper with them unless
## you understand what you are doing
##
################################################################

## GC configuration
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly

## optimizations
# pre-touch memory pages used by the JVM during initialization
-XX:+AlwaysPreTouch

## basic

# explicitly set the stack size
-Xss5m

# set to headless, just in case
-Djava.awt.headless=true

# ensure UTF-8 encoding by default (e.g. filenames)
-Dfile.encoding=UTF-8

# use our provided JNA always versus the system one
-Djna.nosys=true

# turn off a JDK optimization that throws away stack traces for common
# exceptions because stack traces are important for debugging
-XX:-OmitStackTraceInFastThrow

# flags to configure Netty
-Dio.netty.noUnsafe=true
-Dio.netty.noKeySetOptimization=true
-Dio.netty.recycler.maxCapacityPerThread=0

# log4j 2
-Dlog4j.shutdownHookEnabled=false
-Dlog4j2.disable.jmx=true

-Djava.io.tmpdir=${ES_TMPDIR}

## heap dumps

# generate a heap dump when an allocation from the Java heap fails
# heap dumps are created in the working directory of the JVM
-XX:+HeapDumpOnOutOfMemoryError

# specify an alternative path for heap dumps; ensure the directory exists and
# has sufficient space
-XX:HeapDumpPath=data

# specify an alternative path for JVM fatal error logs
-XX:ErrorFile=logs/hs_err_pid%p.log

## JDK 8 GC logging

8:-XX:+PrintGCDetails
8:-XX:+PrintGCDateStamps
8:-XX:+PrintTenuringDistribution
8:-XX:+PrintGCApplicationStoppedTime
8:-Xloggc:logs/gc.log
8:-XX:+UseGCLogFileRotation
8:-XX:NumberOfGCLogFiles=32
8:-XX:GCLogFileSize=64m

# JDK 9+ GC logging
9-:-Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m
# due to internationalization enhancements in JDK 9 Elasticsearch need to set the provider to COMPAT otherwise
# time/date parsing will break in an incompatible way for some date patterns and locals
9-:-Djava.locale.providers=COMPAT

# temporary workaround for C2 bug with JDK 10 on hardware with AVX-512
10-:-XX:UseAVX=2
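
As a sanity check (assuming the node is on the default localhost:9200), the heap reported by the running node should show roughly 10gb if the -Xms10g/-Xmx10g above were picked up and are not overridden elsewhere, for example by ES_JAVA_OPTS:

# heap.max should report roughly 10gb if the settings above took effect
curl -s 'http://localhost:9200/_cat/nodes?v&h=name,heap.max,heap.percent,ram.max'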

Can you please use the format button - </> - on that code? It's very hard to read.

Hi, warkolm

Thanks, I've re-edited it; please take a look.

Hi, Warkolm

In the end, I still couldn't find the cause of this problem, so I reinstalled the ELK stack with a new version. After that, it's back to working.

Anyway, thanks for your attention and help. Have a nice day.
