How to reduce Elasticsearch client node memory usage

I have a coordinating-only (client) node, configured in elasticsearch.yml:

node.master: false
node.data: false

I allocated 10 GB of heap to the JVM:
-Xms10g
-Xmx10g

I use the bulk API to insert data continuously over a long period. After the client node has been running for about 10 minutes, its heap usage reaches 100% and never decreases.


After a while, the client node crashes with an OutOfMemoryError:

java.lang.OutOfMemoryError: Java heap space
Dumping heap to data/java_pid3851.hprof ...
Heap dump file created [11703693630 bytes in 97.763 secs]
[2019-11-15T16:34:56,625][ERROR][o.e.ExceptionsHelper     ] [es] fatal error
	at org.elasticsearch.ExceptionsHelper.lambda$maybeDieOnAnotherThread$4(ExceptionsHelper.java:300)
	at java.base/java.util.Optional.ifPresent(Optional.java:176)
	at org.elasticsearch.ExceptionsHelper.maybeDieOnAnotherThread(ExceptionsHelper.java:290)
	at org.elasticsearch.transport.netty4.Netty4TcpChannel.lambda$addPromise$1(Netty4TcpChannel.java:88)
	at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:500)
	at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:474)
	at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:413)
	at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:538)
	at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:531)

Plugins installed: [none]

JVM version (java -version): 13.0.1
Elasticsearch version: 7.4.2

OS version (uname -a if on a Unix-like system):
Linux es 3.10.0-1062.4.1.el7.x86_64 #1 SMP Fri Oct 18 17:15:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

elasticsearch log:

[2019-11-15T16:20:33,317][WARN ][o.e.m.j.JvmGcMonitorService] [es] [gc][2331] overhead, spent [1.4s] collecting in the last [2.3s]
[2019-11-15T16:20:40,712][WARN ][o.e.m.j.JvmGcMonitorService] [es] [gc][2337] overhead, spent [1.7s] collecting in the last [2.3s]
[2019-11-15T16:20:48,607][WARN ][o.e.m.j.JvmGcMonitorService] [es] [gc][2344] overhead, spent [1.5s] collecting in the last [1.8s]
[2019-11-15T16:20:55,414][WARN ][o.e.m.j.JvmGcMonitorService] [es] [gc][2350] overhead, spent [1.6s] collecting in the last [1.8s]
[2019-11-15T16:21:04,547][WARN ][o.e.m.j.JvmGcMonitorService] [es] [gc][2358] overhead, spent [1.7s] collecting in the last [2.1s]
[2019-11-15T16:21:10,891][WARN ][o.e.m.j.JvmGcMonitorService] [es] [gc][2363] overhead, spent [1.6s] collecting in the last [2.3s]
[2019-11-15T16:21:37,683][WARN ][o.e.m.j.JvmGcMonitorService] [es] [gc][2389] overhead, spent [1.6s] collecting in the last [1.6s]
[2019-11-15T16:21:48,849][WARN ][o.e.m.j.JvmGcMonitorService] [es] [gc][2399] overhead, spent [1.5s] collecting in the last [2.1s]
[2019-11-15T16:22:00,279][WARN ][o.e.m.j.JvmGcMonitorService] [es] [gc][2410] overhead, spent [1.3s] collecting in the last [1.3s]

The Elasticsearch Go SDK client logs a lot of errors like the following:

error:map[bytes_limit:1.0200547328e+10 bytes_wanted:1.0648510022e+10 durability:TRANSIENT
reason:[parent] Data too large, data for [<http_request>] would be [10648510022/9.9gb], which is 
larger than the limit of [10200547328/9.5gb], real usage: [10638953944/9.9gb], new bytes 
reserved: [9556078/9.1mb], usages [request=6040780800/5.6gb, fielddata=0/0b, 
in_flight_requests=212138460/202.3mb, accounting=0/0b] root_cause:
[map[bytes_limit:1.0200547328e+10 bytes_wanted:1.0648510022e+10 durability:TRANSIENT 
reason:[parent] Data too large, data for [<http_request>] would be [10648510022/9.9gb], which is
larger than the limit of [10200547328/9.5gb], real usage: [10638953944/9.9gb], new bytes 
reserved: [9556078/9.1mb], usages [request=6040780800/5.6gb, fielddata=0/0b, 
in_flight_requests=212138460/202.3mb, accounting=0/0b] type:circuit_breaking_exception]] 
type:circuit_breaking_exception] status:429

I would recommend sending smaller bulk requests. A common guideline is to keep each bulk request under roughly 5 MB. The circuit_breaking_exception (HTTP 429) above is the parent circuit breaker rejecting new requests because the heap is already nearly full; smaller, throttled batches keep less request data in flight at once and give the garbage collector a chance to keep up.