java.lang.OutOfMemoryError Kubernetes GKE

Morning All,

I am running Elasticsearch & Kibana with Fluentd in Kubernetes on GKE.
Each node in the cluster is running Kubernetes 1.8.5 and has 4 vCPUs and 15 GB of memory.

I am running ES v6.0.0 with the following config.

cluster.name: "kubernetes-cluster"
network.host: 0.0.0.0
discovery.zen.minimum_master_nodes: 1
http.cors.enabled: true
http.cors.allow-origin: "*"

And the following JVM arguments:

-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly
-XX:+AlwaysPreTouch
-Xss1m
-Djava.awt.headless=true
-Dfile.encoding=UTF-8
-Djna.nosys=true
-XX:-OmitStackTraceInFastThrow
-Dio.netty.noUnsafe=true
-Dio.netty.noKeySetOptimization=true
-Dio.netty.recycler.maxCapacityPerThread=0
-Dlog4j.shutdownHookEnabled=false
-Dlog4j2.disable.jmx=true
-XX:+HeapDumpOnOutOfMemoryError
-Des.cgroups.hierarchy.override=/
-Xms2048m
-Xmx2048m
-Des.path.home=/usr/share/elasticsearch
-Des.path.conf=/usr/share/elasticsearch/config

Occasionally the ES container will get stuck in a crash loop and keep restarting. Going through the logs, the exception below is thrown just before ES crashes and restarts.

I hit the character count limit, so I have put the exception in a gist.

Please let me know if more info is needed.

-- Rich

You seem to be having very long GC cycles, so you're clearly short of heap. Since you have 15 GB of RAM on each node, you can try increasing your heap to 7 GB instead of 2 GB and see if it works better.
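
For reference, here is a minimal sketch of how that heap change could be applied in the Kubernetes manifest, assuming you are using the official Elasticsearch Docker image, which picks up the ES_JAVA_OPTS environment variable. The container name, image tag and memory figures below are illustrative, not taken from your deployment:

# Fragment of the Elasticsearch container spec; names and numbers are illustrative.
containers:
  - name: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:6.0.0
    env:
      # ES_JAVA_OPTS is honoured by the official image; the -Xms/-Xmx set here
      # take precedence over the 2048m defaults from jvm.options.
      - name: ES_JAVA_OPTS
        value: "-Xms7g -Xmx7g"
    resources:
      requests:
        memory: "12Gi"   # leave headroom on the 15 GB node for Fluentd, kubelet, etc.
      limits:
        memory: "12Gi"

Keeping -Xms equal to -Xmx and the heap at no more than roughly half of the memory available to the container is the usual Elasticsearch guidance, since Lucene relies heavily on the OS filesystem cache outside the heap.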
