What does this error mean - Data too large, data for [<transport_request>]

@HenningAndersen, I've tried 30G, 50G, and now 60G, and I'm still getting this error. It's rather annoying because some kibana/monitoring indices lost their replica shards and the cluster goes YELLOW.
And sorry, I still don't understand what this error is really about.

Data too large, data for [<transport_request>] would be [49.3gb], which is larger than the limit of [47.5gb], real usage: [49.3gb], new bytes reserved: [2896/2.8kb]

Which data is too large? What is the transport_request? Unfortunately, I can't find any information about this. Could you please explain it, or tell me where I can read about it?
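From what I can tell (my own reading, so please correct me if I'm wrong): the error seems to come from the parent circuit breaker, whose limit defaults to 95% of the heap when real-memory tracking is on (`indices.breaker.use_real_memory: true`, the 7.x default via `indices.breaker.total.limit: 95%`). That matches the numbers in the message: 47.5gb is exactly 95% of a 50g heap. A minimal sketch of that arithmetic, assuming the 95% default:

```python
# Sketch: how the "limit of [47.5gb]" in the error message is derived.
# Assumes the Elasticsearch 7.x default parent breaker limit of 95% of
# heap (indices.breaker.total.limit) with real-memory tracking enabled.

def parent_breaker_limit_gb(heap_gb: float, limit_pct: float = 0.95) -> float:
    """Return the parent circuit breaker limit for a given heap size."""
    return heap_gb * limit_pct

# 47.5gb in the error is exactly 95% of a 50g heap; with -Xmx60g the
# limit would move to 57gb, but real usage can still grow past it.
print(parent_breaker_limit_gb(50))
print(parent_breaker_limit_gb(60))
```

If this is right, then raising the heap only moves the limit; the breaker trips because the JVM's real memory usage (49.3gb here) is above that threshold when the transport request arrives.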

Here is my jvm.options:

-Xms60g
-Xmx60g
10:-XX:-UseConcMarkSweepGC
10:-XX:-UseCMSInitiatingOccupancyOnly
10:-XX:+UseG1GC
10:-XX:InitiatingHeapOccupancyPercent=30
10:-XX:G1ReservePercent=25
10:-XX:MaxGCPauseMillis=400
10:-XX:+ParallelRefProcEnabled
10:-verbosegc
-Des.networkaddress.cache.ttl=60
-Des.networkaddress.cache.negative.ttl=10
-XX:+AlwaysPreTouch
-Xss20m
-Djava.awt.headless=true
-Dfile.encoding=UTF-8
-Djna.nosys=true
-XX:-OmitStackTraceInFastThrow
-Dio.netty.noUnsafe=true
-Dio.netty.noKeySetOptimization=true
-Dio.netty.recycler.maxCapacityPerThread=0
-Dlog4j.shutdownHookEnabled=false
-Dlog4j2.disable.jmx=true
-Djava.io.tmpdir=${ES_TMPDIR}
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=data
-XX:ErrorFile=logs/hs_err_pid%p.log
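In case it helps diagnose this, I've been looking at the breaker statistics from `GET _nodes/stats/breaker`. Below is a small Python sketch that computes the parent breaker headroom from a response shaped like that API's output; the sample payload and node name are illustrative, not taken from my cluster:

```python
# Sketch: computing parent circuit-breaker headroom from a
# `GET _nodes/stats/breaker` response. The sample dict below is
# illustrative; field names follow the node-stats breaker format.

def parent_breaker_headroom(stats: dict) -> dict:
    """Map node id -> bytes remaining before the parent breaker trips."""
    headroom = {}
    for node_id, node in stats["nodes"].items():
        parent = node["breakers"]["parent"]
        headroom[node_id] = (
            parent["limit_size_in_bytes"] - parent["estimated_size_in_bytes"]
        )
    return headroom

# Illustrative single-node response mirroring the numbers in the error:
sample = {
    "nodes": {
        "node-1": {
            "breakers": {
                "parent": {
                    "limit_size_in_bytes": 51002736640,      # 47.5gb
                    "estimated_size_in_bytes": 52935477411,  # ~49.3gb
                    "tripped": 1,
                }
            }
        }
    }
}

# A negative headroom means the breaker is rejecting requests.
print(parent_breaker_headroom(sample))
```

A negative number here would line up with the rejections I'm seeing.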
