OOMKilled & CrashLoopBackOff error for Elasticsearch

Hi Team,

I am trying to use the official Helm chart to build an Elasticsearch cluster in a Kubernetes cluster. I used the same Helm chart on CentOS Atomic Host and successfully built an Elasticsearch cluster. However, when I try the same Helm chart on RHEL 7.5, the Elasticsearch pods go from Running to OOMKilled to CrashLoopBackOff. I am not sure what is wrong with the values.yaml configuration, since the same chart works fine on CentOS Atomic Host. Please guide me: do I have to change any settings on RHEL 7.5?

We have around 250GB of RAM on each worker node.

I am using Elasticsearch 7.5.2 with the default esJavaOpts.

image: "docker.elastic.co/elasticsearch/elasticsearch"
imageTag: "7.5.2"
imagePullPolicy: "IfNotPresent"

podAnnotations: {}
  # iam.amazonaws.com/role: es-cluster

# additional labels
labels: {}

esJavaOpts: "-Xmx1g -Xms1g"
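
For reference, I have not overridden the chart's resources section. A sketch of what I believe the effective memory settings look like on my deployment, based on the 2Gi cgroup limit visible in the dmesg output further down (these values are my assumption, not copied from my values.yaml):

resources:
  requests:
    memory: "2Gi"
  limits:
    memory: "2Gi"

With a 1g heap inside a 2Gi container limit, everything the JVM allocates outside the heap (direct buffers, metaspace, thread stacks) has to fit into the remaining space.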

I am not seeing any errors when I execute the dmesg command on the master nodes. I can see the messages below on the worker nodes.

Kernel version

[root@cesiumk8s-elk1 ~]# uname -a
Linux cesiumk8s-elk1.xxx.com 3.10.0-862.el7.x86_64 #1 SMP Wed Mar 21 18:14:51 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux
[root@cesiumk8s-elk1 ~]# uname -r
3.10.0-862.el7.x86_64
[root@cesiumk8s-elk1 ~]#


[530710.804005] Memory cgroup out of memory: Kill process 56296 (java) score 1990 or sacrifice child
[530710.805416] Killed process 55956 (java) total-vm:2608264kB, anon-rss:2082960kB, file-rss:5824kB, shmem-rss:0kB
[530742.811881] java invoked oom-killer: gfp_mask=0xd0, order=0, oom_score_adj=993
[530742.811886] java cpuset=8697e9c6cba52723e94ab606d496a3e068b1d1e16be1c672d54d68110e5ab900 mems_allowed=0-1
[530742.811889] CPU: 29 PID: 479 Comm: java Kdump: loaded Tainted: G               ------------ T 3.10.0-862.el7.x86_64 #1
[530742.811891] Hardware name: Cisco Systems Inc UCSC-C240-M4SX/UCSC-C240-M4SX, BIOS C240M4.4.0.1d.0.1005181458 10/05/2018
[530742.811892] Call Trace:
[530742.811901]  [<ffffffff9010d768>] dump_stack+0x19/0x1b
[530742.811903]  [<ffffffff901090ea>] dump_header+0x90/0x229
[530742.811910]  [<ffffffff8fb97456>] ? find_lock_task_mm+0x56/0xc0
[530742.811914]  [<ffffffff8fc0b1f8>] ? try_get_mem_cgroup_from_mm+0x28/0x60
[530742.811916]  [<ffffffff8fb97904>] oom_kill_process+0x254/0x3d0
[530742.811919]  [<ffffffff8fc0efe6>] mem_cgroup_oom_synchronize+0x546/0x570
[530742.811921]  [<ffffffff8fc0e460>] ? mem_cgroup_charge_common+0xc0/0xc0
[530742.811924]  [<ffffffff8fb98194>] pagefault_out_of_memory+0x14/0x90
[530742.811926]  [<ffffffff9010720c>] mm_fault_error+0x6a/0x157
[530742.811929]  [<ffffffff9011a886>] __do_page_fault+0x496/0x4f0
[530742.811931]  [<ffffffff9011a915>] do_page_fault+0x35/0x90
[530742.811935]  [<ffffffff90116768>] page_fault+0x28/0x30
[530742.811938] Task in /kubepods/burstable/pod24a8a7ea-b704-4542-943e-4bb11a0ff9df/8697e9c6cba52723e94ab606d496a3e068b1d1e16be1c672d54d68110e5ab900 killed as a result of limit of /kubepods/burstable/pod24a8a7ea-b704-4542-943e-4bb11a0ff9df
[530742.811941] memory: usage 2097152kB, limit 2097152kB, failcnt 3651
[530742.811942] memory+swap: usage 2097152kB, limit 9007199254740988kB, failcnt 0
[530742.811943] kmem: usage 15412kB, limit 9007199254740988kB, failcnt 0
[530742.811944] Memory cgroup stats for /kubepods/burstable/pod24a8a7ea-b704-4542-943e-4bb11a0ff9df: cache:0KB rss:0KB rss_huge:0KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
[530742.811960] Memory cgroup stats for /kubepods/burstable/pod24a8a7ea-b704-4542-943e-4bb11a0ff9df/e64e6862ec7f1c2a40af2d5abe56719cc323b6a96e8c799779d399a33109617e: cache:0KB rss:40KB rss_huge:0KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:40KB inactive_file:0KB active_file:0KB unevictable:0KB
[530742.811979] Memory cgroup stats for /kubepods/burstable/pod24a8a7ea-b704-4542-943e-4bb11a0ff9df/8697e9c6cba52723e94ab606d496a3e068b1d1e16be1c672d54d68110e5ab900: cache:40KB rss:2081660KB rss_huge:2066432KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:2081660KB inactive_file:40KB active_file:0KB unevictable:0KB
[530742.811992] [ pid ]   uid  tgid total_vm      rss nr_ptes swapents oom_score_adj name
[530742.812260] [54433]  1000 54433      253        1       4        0          -998 pause
[530742.812266] [57057]  1000 57057   652066   521866    1052        0           993 java
[530742.812268] Memory cgroup out of memory: Kill process 479 (java) score 1989 or sacrifice child
[530742.813666] Killed process 57057 (java) total-vm:2608264kB, anon-rss:2081636kB, file-rss:5828kB, shmem-rss:0kB


Thanks,
Kasim Shaik

Hi Team,

Any update from folks?

I increased the CPU and memory resources for the Elasticsearch StatefulSet pods.

resources:
  requests:
    cpu: "8000m"
    memory: "16Gi"
  limits:
    cpu: "16000m"
    memory: "32Gi"

I noticed the following stack traces.

"stacktrace": ["org.elasticsearch.transport.ConnectTransportException: [elasticsearch-master-1][10.233.71.34:9300] connect_exception",
"at org.elasticsearch.transport.TcpTransport$ChannelsConnectedListener.onFailure(TcpTransport.java:989) ~[elasticsearch-7.5.2.jar:7.5.2]",
"at org.elasticsearch.action.ActionListener.lambda$toBiConsumer$3(ActionListener.java:162) ~[elasticsearch-7.5.2.jar:7.5.2]",
"at org.elasticsearch.common.concurrent.CompletableContext.lambda$addListener$0(CompletableContext.java:42) ~[elasticsearch-core-7.5.2.jar:7.5.2]",
"at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859) ~[?:?]",
"at java.util.concurrent.CompletableFuture.uniWhenCompleteStage(CompletableFuture.java:883) ~[?:?]",
"at java.util.concurrent.CompletableFuture.whenComplete(CompletableFuture.java:2322) ~[?:?]",
"at org.elasticsearch.common.concurrent.CompletableContext.addListener(CompletableContext.java:45) ~[elasticsearch-core-7.5.2.jar:7.5.2]",
"at org.elasticsearch.transport.netty4.Netty4TcpChannel.addConnectListener(Netty4TcpChannel.java:121) ~[?:?]",
"at org.elasticsearch.transport.TcpTransport.initiateConnection(TcpTransport.java:299) ~[elasticsearch-7.5.2.jar:7.5.2]",
"at org.elasticsearch.transport.TcpTransport.openConnection(TcpTransport.java:266) ~[elasticsearch-7.5.2.jar:7.5.2]",
"at org.elasticsearch.transport.ConnectionManager.internalOpenConnection(ConnectionManager.java:245) ~[elasticsearch-7.5.2.jar:7.5.2]",
"at org.elasticsearch.transport.ConnectionManager.connectToNode(ConnectionManager.java:140) ~[elasticsearch-7.5.2.jar:7.5.2]",
"at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:370) ~[elasticsearch-7.5.2.jar:7.5.2]",
"at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:354) ~[elasticsearch-7.5.2.jar:7.5.2]",
"at org.elasticsearch.cluster.NodeConnectionsService$ConnectionTarget$1.doRun(NodeConnectionsService.java:307) ~[elasticsearch-7.5.2.jar:7.5.2]",
"at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.5.2.jar:7.5.2]",
"at java.util.ArrayList.forEach(ArrayList.java:1507) ~[?:?]",
"at org.elasticsearch.cluster.NodeConnectionsService.connectToNodes(NodeConnectionsService.java:137) ~[elasticsearch-7.5.2.jar:7.5.2]",
"at org.elasticsearch.cluster.service.ClusterApplierService.connectToNodesAndWait(ClusterApplierService.java:504) ~[elasticsearch-7.5.2.jar:7.5.2]",
"at org.elasticsearch.cluster.service.ClusterApplierService.applyChanges(ClusterApplierService.java:472) ~[elasticsearch-7.5.2.jar:7.5.2]",
"at org.elasticsearch.cluster.service.ClusterApplierService.runTask(ClusterApplierService.java:432) ~[elasticsearch-7.5.2.jar:7.5.2]",
"at org.elasticsearch.cluster.service.ClusterApplierService.access$100(ClusterApplierService.java:73) ~[elasticsearch-7.5.2.jar:7.5.2]",
"at org.elasticsearch.cluster.service.ClusterApplierService$UpdateTask.run(ClusterApplierService.java:176) ~[elasticsearch-7.5.2.jar:7.5.2]",
"at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:703) ~[elasticsearch-7.5.2.jar:7.5.2]",
"at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) ~[elasticsearch-7.5.2.jar:7.5.2]",
"at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) ~[elasticsearch-7.5.2.jar:7.5.2]",
"at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]",
"at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]",
"at java.lang.Thread.run(Thread.java:830) [?:?]",
"Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: 10.233.71.34/10.233.71.34:9300",
"Caused by: java.net.ConnectException: Connection refused",
"at sun.nio.ch.Net.pollConnect(Native Method) ~[?:?]",
"at sun.nio.ch.Net.pollConnectNow(Net.java:579) ~[?:?]",
"at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:820) ~[?:?]",
"at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:330) ~[?:?]",
"at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334) ~[?:?]",
"at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:688) ~[?:?]",
"at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:600) ~[?:?]",
"at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:554) ~[?:?]",
"at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:514) ~[?:?]",
"at io.netty.util.concurrent.SingleThreadEventExecutor$6.run(SingleThreadEventExecutor.java:1050) ~[?:?]",
"at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[?:?]",
"... 1 more"] }


"stacktrace": ["org.elasticsearch.transport.NodeNotConnectedException: [elasticsearch-master-1][10.233.71.34:9300] Node not connected",
"at org.elasticsearch.transport.ConnectionManager.getConnection(ConnectionManager.java:189) ~[elasticsearch-7.5.2.jar:7.5.2]",
"at org.elasticsearch.transport.TransportService.getConnection(TransportService.java:617) ~[elasticsearch-7.5.2.jar:7.5.2]",
"at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:589) [elasticsearch-7.5.2.jar:7.5.2]",
"at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.start(TransportNodesAction.java:182) [elasticsearch-7.5.2.jar:7.5.2]",
"at org.elasticsearch.action.support.nodes.TransportNodesAction.doExecute(TransportNodesAction.java:82) [elasticsearch-7.5.2.jar:7.5.2]",
"at org.elasticsearch.action.support.nodes.TransportNodesAction.doExecute(TransportNodesAction.java:51) [elasticsearch-7.5.2.jar:7.5.2]",
"at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:153) [elasticsearch-7.5.2.jar:7.5.2]",
"at org.elasticsearch.xpack.security.action.filter.SecurityActionFilter.lambda$apply$0(SecurityActionFilter.java:86) [x-pack-security-7.5.2.jar:7.5.2]",
"at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) [elasticsearch-7.5.2.jar:7.5.2]",
"at org.elasticsearch.xpack.security.action.filter.SecurityActionFilter.lambda$authorizeRequest$4(SecurityActionFilter.java:172) [x-pack-security-7.5.2.jar:7.5.2]",
"at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) [elasticsearch-7.5.2.jar:7.5.2]",
"at org.elasticsearch.xpack.security.authz.AuthorizationService.authorizeSystemUser(AuthorizationService.java:378) [x-pack-security-7.5.2.jar:7.5.2]",
"at org.elasticsearch.xpack.security.authz.AuthorizationService.authorize(AuthorizationService.java:186) [x-pack-security-7.5.2.jar:7.5.2]",
"at org.elasticsearch.xpack.security.action.filter.SecurityActionFilter.authorizeRequest(SecurityActionFilter.java:172) [x-pack-security-7.5.2.jar:7.5.2]",
"at org.elasticsearch.xpack.security.action.filter.SecurityActionFilter.lambda$applyInternal$3(SecurityActionFilter.java:158) [x-pack-security-7.5.2.jar:7.5.2]",
"at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) [elasticsearch-7.5.2.jar:7.5.2]",
"at org.elasticsearch.xpack.security.authc.AuthenticationService$Authenticator.lambda$authenticateAsync$2(AuthenticationService.java:246) [x-pack-security-7.5.2.jar:7.5.2]",
"at org.elasticsearch.xpack.security.authc.AuthenticationService$Authenticator.lambda$lookForExistingAuthentication$6(AuthenticationService.java:306) [x-pack-security-7.5.2.jar:7.5.2]",
"at org.elasticsearch.xpack.security.authc.AuthenticationService$Authenticator.lookForExistingAuthentication(AuthenticationService.java:317) [x-pack-security-7.5.2.jar:7.5.2]",
"at org.elasticsearch.xpack.security.authc.AuthenticationService$Authenticator.authenticateAsync(AuthenticationService.java:244) [x-pack-security-7.5.2.jar:7.5.2]",
"at org.elasticsearch.xpack.security.authc.AuthenticationService$Authenticator.access$000(AuthenticationService.java:196) [x-pack-security-7.5.2.jar:7.5.2]",
"at org.elasticsearch.xpack.security.authc.AuthenticationService.authenticate(AuthenticationService.java:139) [x-pack-security-7.5.2.jar:7.5.2]",
"at org.elasticsearch.xpack.security.action.filter.SecurityActionFilter.applyInternal(SecurityActionFilter.java:155) [x-pack-security-7.5.2.jar:7.5.2]",
"at org.elasticsearch.xpack.security.action.filter.SecurityActionFilter.lambda$apply$1(SecurityActionFilter.java:92) [x-pack-security-7.5.2.jar:7.5.2]",
"at org.elasticsearch.xpack.core.security.SecurityContext.executeAsUser(SecurityContext.java:96) [x-pack-core-7.5.2.jar:7.5.2]",
"at org.elasticsearch.xpack.security.action.filter.SecurityActionFilter.apply(SecurityActionFilter.java:90) [x-pack-security-7.5.2.jar:7.5.2]",
"at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:151) [elasticsearch-7.5.2.jar:7.5.2]",
"at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:129) [elasticsearch-7.5.2.jar:7.5.2]",
"at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:64) [elasticsearch-7.5.2.jar:7.5.2]",
"at org.elasticsearch.client.node.NodeClient.executeLocally(NodeClient.java:83) [elasticsearch-7.5.2.jar:7.5.2]",
"at org.elasticsearch.client.node.NodeClient.doExecute(NodeClient.java:72) [elasticsearch-7.5.2.jar:7.5.2]",
"at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:396) [elasticsearch-7.5.2.jar:7.5.2]",
"at org.elasticsearch.client.support.AbstractClient$ClusterAdmin.execute(AbstractClient.java:685) [elasticsearch-7.5.2.jar:7.5.2]",
"at org.elasticsearch.client.support.AbstractClient$ClusterAdmin.nodesStats(AbstractClient.java:781) [elasticsearch-7.5.2.jar:7.5.2]",
"at org.elasticsearch.cluster.InternalClusterInfoService.updateNodeStats(InternalClusterInfoService.java:248) [elasticsearch-7.5.2.jar:7.5.2]",
"at org.elasticsearch.cluster.InternalClusterInfoService.refresh(InternalClusterInfoService.java:289) [elasticsearch-7.5.2.jar:7.5.2]",
"at org.elasticsearch.cluster.InternalClusterInfoService.maybeRefresh(InternalClusterInfoService.java:269) [elasticsearch-7.5.2.jar:7.5.2]",
"at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:703) [elasticsearch-7.5.2.jar:7.5.2]",
"at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]",
"at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]",
"at java.lang.Thread.run(Thread.java:830) [?:?]"] }

Thanks,
Kasim Shaik

I would first upgrade the kernel. CentOS 7 Linux kernels are known to have some issues. Maybe they have been resolved in recent releases.

@michael.morello, thanks for responding to my request. Sure, I will upgrade the kernel. Do you have any recommendations for esJavaOpts, since I have 250 GB of RAM on each worker node?
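
In case it helps frame the question, this is what I am considering trying, based on the general guidance of keeping the Elasticsearch heap at no more than about half of the memory given to the container and below roughly 31-32 GB so that compressed object pointers stay enabled (the numbers below are my own assumption, not a confirmed recommendation):

esJavaOpts: "-Xms26g -Xmx26g"

resources:
  requests:
    memory: "52Gi"
  limits:
    memory: "52Gi"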

Hi @michael.morello,

I upgraded the kernel to 3.10.0-1062.el7.x86_64, but I am still seeing CrashLoopBackOff for the Elasticsearch pods. Could you guide me on how to fix this error?

[root@cesiumk8s-elk4 ~]# uname -r
3.10.0-1062.el7.x86_64
[root@cesiumk8s-elk4 ~]# uname -a
Linux cesiumk8s-elk4.xxx.com 3.10.0-1062.el7.x86_64 #1 SMP Wed Aug 7 18:08:02 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
[root@cesiumk8s-elk4 ~]#

Hi Team,

I noticed the following error message for one of the Elasticsearch pods in my cluster. What could have gone wrong with it?

#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x00007fd9997a3c5a (sent by kill), pid=1, tid=342
#
# JRE version: OpenJDK Runtime Environment (13.0.1+9) (build 13.0.1+9)
# Java VM: OpenJDK 64-Bit Server VM (13.0.1+9, mixed mode, sharing, tiered, compressed oops, concurrent mark sweep gc, linux-amd64)
# Problematic frame:
# J 6179 c1 io.netty.channel.nio.AbstractNioChannel.eventLoop()Lio/netty/channel/nio/NioEventLoop; (8 bytes) @ 0x00007fd9997a3c5a [0x00007fd9997a3a60+0x00000000000001fa]
#
# Core dump will be written. Default location: /usr/share/elasticsearch/core.1
#
# An error report file with more information is saved as:
# logs/hs_err_pid1.log
Compiled method (c1)   15243 6179       3       io.netty.channel.nio.AbstractNioChannel::eventLoop (8 bytes)
 total in heap  [0x00007fd9997a3890,0x00007fd9997a3f48] = 1720
 relocation     [0x00007fd9997a39f0,0x00007fd9997a3a48] = 88
 main code      [0x00007fd9997a3a60,0x00007fd9997a3dc0] = 864
 stub code      [0x00007fd9997a3dc0,0x00007fd9997a3e68] = 168
 oops           [0x00007fd9997a3e68,0x00007fd9997a3e70] = 8
 metadata       [0x00007fd9997a3e70,0x00007fd9997a3e80] = 16
 scopes data    [0x00007fd9997a3e80,0x00007fd9997a3ec0] = 64
 scopes pcs     [0x00007fd9997a3ec0,0x00007fd9997a3f40] = 128
 dependencies   [0x00007fd9997a3f40,0x00007fd9997a3f48] = 8
#
# If you would like to submit a bug report, please visit:
#   https://github.com/AdoptOpenJDK/openjdk-build/issues
#

[error occurred during error reporting (), id 0xb, SIGSEGV (0xb) at pc=0x00007fd9b80d9b77]
(the line above repeats roughly 30 times in the original log)

[Too many errors, abort]
(the line above repeats dozens of times in the original log)


This is still an old kernel. The latest version is 3.10.0-1062.18.1.el7.
According to https://access.redhat.com/errata/RHSA-2019:3055, a leak when using the kmem control group was fixed in 3.10.0-1062.4.1.el7 (not 100% sure it is the one you are hitting, though).

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.