Hi!
Over time, this error has been occurring more and more often on my server:
elasticsearch.exceptions.TransportError: TransportError(429, 'circuit_breaking_exception', '[parent] Data too large, data for [<transport_request>] would be [1977484092/1.8gb], which is larger than the limit of [1973865676/1.8gb], real usage: [1977477480/1.8gb], new bytes reserved: [6612/6.4kb], usages [request=80/80b, fielddata=1221814/1.1mb, in_flight_requests=6612/6.4kb, accounting=618490711/589.8mb]')
# java -version
openjdk version "11.0.6" 2020-01-14
OpenJDK Runtime Environment (build 11.0.6+10-post-Ubuntu-1ubuntu118.04.1)
OpenJDK 64-Bit Server VM (build 11.0.6+10-post-Ubuntu-1ubuntu118.04.1, mixed mode, sharing)
Unfortunately, I only have 4 GB of RAM per node.
In jvm.options, I have set:
-Xms2g
-Xmx2g
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly
-Djava.io.tmpdir=${ES_TMPDIR}
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/var/lib/elasticsearch
-XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log
8:-XX:+PrintGCDetails
8:-XX:+PrintGCDateStamps
8:-XX:+PrintTenuringDistribution
8:-XX:+PrintGCApplicationStoppedTime
8:-Xloggc:/var/log/elasticsearch/gc.log
8:-XX:+UseGCLogFileRotation
8:-XX:NumberOfGCLogFiles=32
8:-XX:GCLogFileSize=64m
9-:-Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m
I don't use queries that return a lot of data.
However, I often use scroll (scrolling over a large amount of data, sorted by a field).
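For context, my scroll usage follows the usual search-then-scroll pattern in the Python client, roughly like the sketch below (the index name, sort field, batch size, and query are placeholders, not my real ones):

from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

# Initial search that opens the scroll context, sorted by a field
resp = es.search(
    index="my-index",
    body={
        "size": 1000,
        "sort": [{"timestamp": "asc"}],
        "query": {"match_all": {}},
    },
    scroll="2m",
)
scroll_id = resp["_scroll_id"]
hits = resp["hits"]["hits"]

while hits:
    for doc in hits:
        pass  # process each document here
    # Fetch the next batch using the same scroll context
    resp = es.scroll(scroll_id=scroll_id, scroll="2m")
    scroll_id = resp["_scroll_id"]
    hits = resp["hits"]["hits"]

# Release the scroll context on the server when finished
es.clear_scroll(scroll_id=scroll_id)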
Is there any way to minimize the risk of this error?
I understand this is probably down to the small amount of RAM?
There is some information about this error on the web, but what best practices are recommended here?