G1GC causes CircuitBreakingException: [parent] Data too large on 7.1.1

@HenningAndersen

After the change, the process runs as:

  • ps -ef
irteam   135641      1 15  9월04 ?      03:12:21 /home1/irteam/apps/openjdk/bin/java -XX:+UseG1GC -Des.networkaddress.cache.ttl=60 -Des.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.io.tmpdir=/tmp/elasticsearch-13022452245487839244 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=data -XX:ErrorFile=logs/hs_err_pid%p.log -Djava.locale.providers=COMPAT -Xms12g -Xmx12g -Xlog:gc*,gc+age=trace,safepoint:file=logs/log/gc.log/log.gc.log:utctime,pid,tags:filecount=16,filesize=100m -Dio.netty.allocator.type=pooled -XX:MaxDirectMemorySize=6442450944 -Des.path.home=#[PATH_HOME] -Des.path.conf=#[PATH_CONFIG] -Des.distribution.flavor=default -Des.distribution.type=tar -Des.bundled_jdk=true -cp #[PATH_LIB] org.elasticsearch.bootstrap.Elasticsearch -d -p #[PATH_PID] -E http.port=#[PORT_HTTP] -E transport.tcp.port=#[PORT_TCP]

I tried removing -XX:InitiatingHeapOccupancyPercent=75.
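For anyone debugging the same symptom, current breaker usage can be compared against heap usage with the nodes stats API (the host and port below are placeholders for your environment):

```shell
# Show current circuit breaker usage per node (parent, request, fielddata, ...),
# including the configured limit and the estimated size for each breaker.
curl -s 'http://localhost:9200/_nodes/stats/breaker?pretty'

# Show JVM heap usage alongside, to compare real heap usage against the parent
# breaker limit reported in the exception message.
curl -s 'http://localhost:9200/_nodes/stats/jvm?pretty'
```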

  • ES Log
[2019-09-05T09:01:34,119][WARN ][o.e.i.c.IndicesClusterStateService] [#[MONITORING_NODE_NAME]] [.monitoring-kibana-7-2019.08.17][0] marking and sending shard failed due to [failed recovery]
org.elasticsearch.indices.recovery.RecoveryFailedException: [.monitoring-kibana-7-2019.08.17][0]: Recovery failed from {#[MONITORING_NODE_NAME]}{n370gtNGT7-4Yj-RuYaT3Q}{gEq8SSXoRnaf6V1j6suM-g}{#[MONITORING_NODE_IP]}{#[MONITORING_NODE_IP:PORT]}{xpack.installed=true} into {#[MONITORING_NODE_NAME]}{iF1tt8P6QnGpDfgcn0n6Ow}{laJ9SNBDTfeRwi_-YOyAsA}{#[MONITORING_NODE_IP]}{#[MONITORING_NODE_IP:PORT]}{xpack.installed=true}
        at org.elasticsearch.indices.recovery.PeerRecoveryTargetService.lambda$doRecovery$2(PeerRecoveryTargetService.java:249) [elasticsearch-7.2.1.jar:7.2.1]
        at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$1.handleException(PeerRecoveryTargetService.java:294) [elasticsearch-7.2.1.jar:7.2.1]
        ...
Caused by: org.elasticsearch.transport.RemoteTransportException: [#[MONITORING_NODE_NAME]][#[MONITORING_NODE_IP:PORT]][internal:index/shard/recovery/start_recovery]
Caused by: org.elasticsearch.index.engine.RecoveryEngineException: Phase[1] phase1 failed
        at org.elasticsearch.indices.recovery.RecoverySourceHandler.recoverToTarget(RecoverySourceHandler.java:182) ~[elasticsearch-7.2.1.jar:7.2.1]
        ...
Caused by: org.elasticsearch.indices.recovery.RecoverFilesRecoveryException: Failed to transfer [13] files with total size of [4mb]
		...
Caused by: org.elasticsearch.transport.RemoteTransportException: [#[MONITORING_NODE_NAME]][#[MONITORING_NODE_IP:PORT]][internal:index/shard/recovery/filesInfo]
Caused by: org.elasticsearch.common.breaker.CircuitBreakingException: [parent] Data too large, data for [<transport_request>] would be [12858406148/11.9gb], which is larger than the limit of [12240656793/11.3gb], real usage: [12858405640/11.9gb], new bytes reserved: [508/508b]
        at org.elasticsearch.indices.breaker.HierarchyCircuitBreakerService.checkParentLimit(HierarchyCircuitBreakerService.java:343) ~[elasticsearch-7.2.1.jar:7.2.1]
        at org.elasticsearch.common.breaker.ChildMemoryCircuitBreaker.addEstimateBytesAndMaybeBreak(ChildMemoryCircuitBreaker.java:128) ~[elasticsearch-7.2.1.jar:7.2.1]
        at org.elasticsearch.transport.InboundHandler.handleRequest(InboundHandler.java:173) ~[elasticsearch-7.2.1.jar:7.2.1]
        at org.elasticsearch.transport.InboundHandler.messageReceived(InboundHandler.java:121) ~[elasticsearch-7.2.1.jar:7.2.1]
        at org.elasticsearch.transport.InboundHandler.inboundMessage(InboundHandler.java:105) ~[elasticsearch-7.2.1.jar:7.2.1]
        at org.elasticsearch.transport.TcpTransport.inboundMessage(TcpTransport.java:660) ~[elasticsearch-7.2.1.jar:7.2.1]
        at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:62) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) ~[?:?]
        at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:323) ~[?:?]
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:297) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) ~[?:?]
        at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:241) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) ~[?:?]
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1408) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) ~[?:?]
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:930) ~[?:?]
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) ~[?:?]
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:682) ~[?:?]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:582) ~[?:?]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:536) ~[?:?]
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496) ~[?:?]
        at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:906) ~[?:?]
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[?:?]
        at java.lang.Thread.run(Thread.java:834) ~[?:?]
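As a sanity check: the limit in the message, 12240656793 bytes, is exactly 95% of the configured 12 GB heap, which matches the documented 7.x default for the parent breaker (`indices.breaker.total.limit` = 95% when `indices.breaker.total.use_real_memory` is true):

```python
# Verify that the reported breaker limit matches 95% of a 12 GiB heap.
heap_bytes = 12 * 1024 ** 3            # -Xms12g -Xmx12g from the command line above
parent_limit = int(heap_bytes * 0.95)  # default indices.breaker.total.limit is 95%
print(parent_limit)                    # 12240656793, the limit in the log message
```

So the breaker is tripping because real heap usage (11.9gb) is already above that 95% line when the small (508-byte) transport request arrives, not because the request itself is large.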

There are many other similar log entries containing 'CircuitBreakingException: [parent] Data too large' as well.

Even after this change, the 'CircuitBreakingException: [parent] Data too large' errors still occur.
Do you have any other advice?
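For reference, the parent breaker is governed by these static elasticsearch.yml settings in 7.x; the values shown are the documented defaults, not custom configuration on this cluster:

```yaml
# elasticsearch.yml -- parent circuit breaker settings (7.x documented defaults).
# With use_real_memory: true, the parent limit is 95% of heap and is checked
# against the real heap usage reported by the JVM, so GC behavior (e.g. G1
# reclaiming memory too late) directly affects when the breaker trips.
indices.breaker.total.use_real_memory: true
indices.breaker.total.limit: 95%
```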