Unexplained memory pressure on Elasticsearch Service

Even though we are not sending a large volume of searches to the instance running on Elasticsearch Service,
JVM Memory Pressure climbs to 80% or even 99% and the instance itself ends up stopping.
Where should I start looking, and what might the cause be?
Here is the WARN log:

[instance-0000000005] [apm-7.4.1-error-000122][0] marking and sending shard failed due to [failed recovery] org.elasticsearch.indices.recovery.RecoveryFailedException

It is difficult to pinpoint the cause from this information alone.
There are several possibilities, such as the Java heap being too small for the data, or the load from the data feed being too high.

As a first step, the following video covers best practices for design and configuration, so I would recommend watching it and checking whether there is anything problematic in your design.
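
As a quick first pass, a few read-only requests from Kibana Dev Tools can show whether heap or shard count is the pressure point; these are suggested starting points only, not commands derived from your logs:

# Heap usage per node (current, percent, max)
GET _cat/nodes?v&h=name,heap.current,heap.percent,heap.max,node.role

# Shard count and disk usage per node
GET _cat/allocation?v

# JVM and circuit-breaker statistics
GET _nodes/stats/jvm,breaker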

Thank you for the reply.
The data registered in the index is only about 40 KB, so I don't think the Java heap is too small. (The instance is configured with 1 GB of RAM.)
Also, another instance with the same Elasticsearch and Kibana memory and storage settings as the one emitting these WARNINGs has not shown the same symptoms, so it is all the harder to tell whether simply increasing this instance's memory is the right fix.

By the way, which version of ES are you running?
Also, please paste the entire log including the exception, not just the part that looks problematic.

Sorry for the late reply.
The ES version is 7.4.1.
I am pasting the log below. (It is extremely long.)
When I checked the instance's status it was in a force start state (after the initial launch it had been left untouched for about a month, so we did not trigger the force start through any operation of our own),
snapshots had stopped, and JVM memory pressure was at 80%.
A restart was recommended, so I restarted, and the following log appeared.

[instance-0000000007] failing shard [failed shard, shard [apm-7.4.1-span-000220][0], node[Ze2d_-UqTHGSkDwCTy5jSA], [R], recovery_source[peer recovery], s[INITIALIZING], a[id=iFMoRIOVTomQk1tutry9gg], unassigned_info[[reason=MANUAL_ALLOCATION], at[2020-06-04T07:31:34.142Z], delayed=false, details[failed shard on node [Ze2d_-UqTHGSkDwCTy5jSA]: failed recovery, failure RecoveryFailedException[[apm-7.4.1-span-000220][0]: Recovery failed from {instance-0000000005}{mllz1SCYQ56TTXNCVjz1xg}{x6dj819tTcGpWWLd-LenLw}{10.46.32.111}{10.46.32.111:19348}{dim}{logical_availability_zone=zone-1, server_name=instance-0000000005.2bbbae81d213405dad0a72515dc00fb3, availability_zone=asia-northeast1-a, xpack.installed=true, region=unknown-region, instance_configuration=gcp.data.highio.1} into {instance-0000000007}{Ze2d_-UqTHGSkDwCTy5jSA}{O_NApsw1R--8TLgVLgkxIQ}{10.46.32.98}{10.46.32.98:19637}{dim}{logical_availability_zone=zone-0, server_name=instance-0000000007.2bbbae81d213405dad0a72515dc00fb3, availability_zone=asia-northeast1-b, xpack.installed=true, instance_configuration=gcp.data.highio.1, region=unknown-region}]; nested: RemoteTransportException[[instance-0000000005][172.17.0.7:19348][internal:index/shard/recovery/start_recovery]]; nested: CircuitBreakingException[[parent] Data too large, data for [<transport_request>] would be [396196036/377.8mb], which is larger than the limit of [394910105/376.6mb], real usage: [396195488/377.8mb], new bytes reserved: [548/548b], usages [request=82032/80.1kb, fielddata=626/626b, in_flight_requests=548/548b, accounting=67296/65.7kb]]; ], allocation_status[no_attempt]], message [failed recovery], failure [RecoveryFailedException[[apm-7.4.1-span-000220][0]: Recovery failed from {instance-0000000005}{mllz1SCYQ56TTXNCVjz1xg}{x6dj819tTcGpWWLd-LenLw}{10.46.32.111}{10.46.32.111:19348}{dim}{logical_availability_zone=zone-1, server_name=instance-0000000005.2bbbae81d213405dad0a72515dc00fb3, availability_zone=asia-northeast1-a, xpack.installed=true, region=unknown-region, instance_configuration=gcp.data.highio.1} into {instance-0000000007}{Ze2d_-UqTHGSkDwCTy5jSA}{O_NApsw1R--8TLgVLgkxIQ}{10.46.32.98}{10.46.32.98:19637}{dim}{logical_availability_zone=zone-0, server_name=instance-0000000007.2bbbae81d213405dad0a72515dc00fb3, availability_zone=asia-northeast1-b, xpack.installed=true, instance_configuration=gcp.data.highio.1, region=unknown-region}]; nested: RemoteTransportException[[instance-0000000005][172.17.0.7:19348][internal:index/shard/recovery/start_recovery]]; nested: RecoveryEngineException[Phase[1] prepare target for translog failed]; nested: RemoteTransportException[[instance-0000000007][172.17.0.5:19637][internal:index/shard/recovery/prepare_translog]]; nested: CircuitBreakingException[[parent] Data too large, data for [<transport_request>] would be [401691300/383mb], which is larger than the limit of [394910105/376.6mb], real usage: [401690832/383mb], new bytes reserved: [468/468b], usages [request=65592/64kb, fielddata=626/626b, in_flight_requests=468/468b, accounting=67296/65.7kb]]; ], markAsStale [true]] org.elasticsearch.indices.recovery.RecoveryFailedException: [apm-7.4.1-span-000220][0]: Recovery failed from {instance-0000000005}{mllz1SCYQ56TTXNCVjz1xg}{x6dj819tTcGpWWLd-LenLw}{10.46.32.111}{10.46.32.111:19348}{dim}{logical_availability_zone=zone-1, server_name=instance-0000000005.2bbbae81d213405dad0a72515dc00fb3, availability_zone=asia-northeast1-a, xpack.installed=true, region=unknown-region, instance_configuration=gcp.data.highio.1} into 
{instance-0000000007}{Ze2d_-UqTHGSkDwCTy5jSA}{O_NApsw1R--8TLgVLgkxIQ}{10.46.32.98}{10.46.32.98:19637}{dim}{logical_availability_zone=zone-0, server_name=instance-0000000007.2bbbae81d213405dad0a72515dc00fb3, availability_zone=asia-northeast1-b, xpack.installed=true, instance_configuration=gcp.data.highio.1, region=unknown-region} at org.elasticsearch.indices.recovery.PeerRecoveryTargetService.lambda$doRecovery$2(PeerRecoveryTargetService.java:245) ~[elasticsearch-7.4.1.jar:7.4.1] at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$1.handleException(PeerRecoveryTargetService.java:290) ~[elasticsearch-7.4.1.jar:7.4.1] at org.elasticsearch.transport.PlainTransportFuture.handleException(PlainTransportFuture.java:97) ~[elasticsearch-7.4.1.jar:7.4.1] at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1120) ~[elasticsearch-7.4.1.jar:7.4.1] at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1120) ~[elasticsearch-7.4.1.jar:7.4.1] at org.elasticsearch.transport.InboundHandler.lambda$handleException$2(InboundHandler.java:243) ~[elasticsearch-7.4.1.jar:7.4.1] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:703) ~[elasticsearch-7.4.1.jar:7.4.1] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:830) [?:?] Caused by: org.elasticsearch.transport.RemoteTransportException: [instance-0000000005][172.17.0.7:19348][internal:index/shard/recovery/start_recovery] Caused by: org.elasticsearch.index.engine.RecoveryEngineException: Phase[1] prepare target for translog failed at org.elasticsearch.indices.recovery.RecoverySourceHandler.lambda$prepareTargetForTranslog$34(RecoverySourceHandler.java:635) ~[elasticsearch-7.4.1.jar:7.4.1] at org.elasticsearch.action.ActionListener$1.onFailure(ActionListener.java:70) ~[elasticsearch-7.4.1.jar:7.4.1] at org.elasticsearch.action.ActionListener$1.onFailure(ActionListener.java:70) ~[elasticsearch-7.4.1.jar:7.4.1] at org.elasticsearch.action.ActionListenerResponseHandler.handleException(ActionListenerResponseHandler.java:59) ~[elasticsearch-7.4.1.jar:7.4.1] at org.elasticsearch.transport.PlainTransportFuture.handleException(PlainTransportFuture.java:97) ~[elasticsearch-7.4.1.jar:7.4.1] at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1120) ~[elasticsearch-7.4.1.jar:7.4.1] at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1120) ~[elasticsearch-7.4.1.jar:7.4.1] at org.elasticsearch.transport.InboundHandler.lambda$handleException$2(InboundHandler.java:243) ~[elasticsearch-7.4.1.jar:7.4.1] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:703) ~[elasticsearch-7.4.1.jar:7.4.1] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?] at java.lang.Thread.run(Thread.java:830) ~[?:?] 
Caused by: org.elasticsearch.transport.RemoteTransportException: [instance-0000000007][172.17.0.5:19637][internal:index/shard/recovery/prepare_translog] Caused by: org.elasticsearch.common.breaker.CircuitBreakingException: [parent] Data too large, data for [<transport_request>] would be [401691300/383mb], which is larger than the limit of [394910105/376.6mb], real usage: [401690832/383mb], new bytes reserved: [468/468b], usages [request=65592/64kb, fielddata=626/626b, in_flight_requests=468/468b, accounting=67296/65.7kb] at org.elasticsearch.indices.breaker.HierarchyCircuitBreakerService.checkParentLimit(HierarchyCircuitBreakerService.java:343) ~[elasticsearch-7.4.1.jar:7.4.1] at org.elasticsearch.common.breaker.ChildMemoryCircuitBreaker.addEstimateBytesAndMaybeBreak(ChildMemoryCircuitBreaker.java:128) ~[elasticsearch-7.4.1.jar:7.4.1] at org.elasticsearch.transport.InboundHandler.handleRequest(InboundHandler.java:170) ~[elasticsearch-7.4.1.jar:7.4.1] at org.elasticsearch.transport.InboundHandler.messageReceived(InboundHandler.java:118) ~[elasticsearch-7.4.1.jar:7.4.1] at org.elasticsearch.transport.InboundHandler.inboundMessage(InboundHandler.java:102) ~[elasticsearch-7.4.1.jar:7.4.1] at org.elasticsearch.transport.TcpTransport.inboundMessage(TcpTransport.java:663) ~[elasticsearch-7.4.1.jar:7.4.1] at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:62) ~[?:?] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) ~[?:?] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) ~[?:?] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) ~[?:?] at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:328) ~[?:?] at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:302) ~[?:?] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) ~[?:?] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) ~[?:?] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) ~[?:?] at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:241) ~[?:?] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) ~[?:?] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) ~[?:?] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) ~[?:?] at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1475) ~[?:?] at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1224) ~[?:?] at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1271) ~[?:?] at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:505) ~[?:?] at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:444) ~[?:?] at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:283) ~[?:?] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) ~[?:?] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) ~[?:?] 
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) ~[?:?] at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1421) ~[?:?] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) ~[?:?] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) ~[?:?] at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:930) ~[?:?] at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) ~[?:?] at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:697) ~[?:?] at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:597) ~[?:?] at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:551) ~[?:?] at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:511) ~[?:?] at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:918) ~[?:?] at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[?:?] at java.lang.Thread.run(Thread.java:830) ~[?:?]

The log continues:

[instance-0000000005] unexpected failure while sending request [internal:cluster/shard/failure] to [{instance-0000000007}{Ze2d_-UqTHGSkDwCTy5jSA}{O_NApsw1R--8TLgVLgkxIQ}{10.46.32.98}{10.46.32.98:19637}{dim}{logical_availability_zone=zone-0, server_name=instance-0000000007.2bbbae81d213405dad0a72515dc00fb3, availability_zone=asia-northeast1-b, xpack.installed=true, region=unknown-region, instance_configuration=gcp.data.highio.1}] for shard entry [shard id [[apm-7.4.1-error-000114][0]], allocation id [RfM6s4xTSTW-cRgPjm_now], primary term [0], message [master {instance-0000000007}{Ze2d_-UqTHGSkDwCTy5jSA}{O_NApsw1R--8TLgVLgkxIQ}{10.46.32.98}{10.46.32.98:19637}{dim}{logical_availability_zone=zone-0, server_name=instance-0000000007.2bbbae81d213405dad0a72515dc00fb3, availability_zone=asia-northeast1-b, xpack.installed=true, region=unknown-region, instance_configuration=gcp.data.highio.1} has not removed previously failed shard. resending shard failure], markAsStale [true]] org.elasticsearch.transport.RemoteTransportException: [instance-0000000007][172.17.0.5:19637][internal:cluster/shard/failure] Caused by: org.elasticsearch.common.breaker.CircuitBreakingException: [parent] Data too large, data for [<transport_request>] would be [403980718/385.2mb], which is larger than the limit of [394910105/376.6mb], real usage: [403979344/385.2mb], new bytes reserved: [1374/1.3kb], usages [request=49152/48kb, fielddata=626/626b, in_flight_requests=34254/33.4kb, accounting=67296/65.7kb] at org.elasticsearch.indices.breaker.HierarchyCircuitBreakerService.checkParentLimit(HierarchyCircuitBreakerService.java:343) ~[elasticsearch-7.4.1.jar:7.4.1] at org.elasticsearch.common.breaker.ChildMemoryCircuitBreaker.addEstimateBytesAndMaybeBreak(ChildMemoryCircuitBreaker.java:128) ~[elasticsearch-7.4.1.jar:7.4.1] at org.elasticsearch.transport.InboundHandler.handleRequest(InboundHandler.java:170) [elasticsearch-7.4.1.jar:7.4.1] at org.elasticsearch.transport.InboundHandler.messageReceived(InboundHandler.java:118) [elasticsearch-7.4.1.jar:7.4.1] at org.elasticsearch.transport.InboundHandler.inboundMessage(InboundHandler.java:102) [elasticsearch-7.4.1.jar:7.4.1] at org.elasticsearch.transport.TcpTransport.inboundMessage(TcpTransport.java:663) [elasticsearch-7.4.1.jar:7.4.1] at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:62) [transport-netty4-client-7.4.1.jar:7.4.1] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-transport-4.1.38.Final.jar:4.1.38.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-transport-4.1.38.Final.jar:4.1.38.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-transport-4.1.38.Final.jar:4.1.38.Final] at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:328) [netty-codec-4.1.38.Final.jar:4.1.38.Final] at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:302) [netty-codec-4.1.38.Final.jar:4.1.38.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-transport-4.1.38.Final.jar:4.1.38.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-transport-4.1.38.Final.jar:4.1.38.Final] at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-transport-4.1.38.Final.jar:4.1.38.Final] at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:241) [netty-handler-4.1.38.Final.jar:4.1.38.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-transport-4.1.38.Final.jar:4.1.38.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-transport-4.1.38.Final.jar:4.1.38.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-transport-4.1.38.Final.jar:4.1.38.Final] at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1475) [netty-handler-4.1.38.Final.jar:4.1.38.Final] at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1224) [netty-handler-4.1.38.Final.jar:4.1.38.Final] at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1271) [netty-handler-4.1.38.Final.jar:4.1.38.Final] at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:505) [netty-codec-4.1.38.Final.jar:4.1.38.Final] at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:444) [netty-codec-4.1.38.Final.jar:4.1.38.Final] at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:283) [netty-codec-4.1.38.Final.jar:4.1.38.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-transport-4.1.38.Final.jar:4.1.38.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-transport-4.1.38.Final.jar:4.1.38.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-transport-4.1.38.Final.jar:4.1.38.Final] at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1421) [netty-transport-4.1.38.Final.jar:4.1.38.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-transport-4.1.38.Final.jar:4.1.38.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-transport-4.1.38.Final.jar:4.1.38.Final] at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:930) [netty-transport-4.1.38.Final.jar:4.1.38.Final] at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.38.Final.jar:4.1.38.Final] at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:697) [netty-transport-4.1.38.Final.jar:4.1.38.Final] at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:597) [netty-transport-4.1.38.Final.jar:4.1.38.Final] at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:551) [netty-transport-4.1.38.Final.jar:4.1.38.Final] at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:511) [netty-transport-4.1.38.Final.jar:4.1.38.Final] at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:918) [netty-common-4.1.38.Final.jar:4.1.38.Final] at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.38.Final.jar:4.1.38.Final] at java.lang.Thread.run(Thread.java:830) [?:?]

Is this Elastic Cloud?
Also, the error is a parent CircuitBreakingException, so raising the memory is very likely to resolve at least this particular problem.
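
If the cluster is still reachable, the parent breaker's limit and current estimate can be inspected directly from Dev Tools; this is a generic, read-only check rather than anything Elastic Cloud specific:

# Per-node breaker statistics: limit, estimated size, and trip count for each breaker.
# In 7.x the parent limit defaults to 95% of the JVM heap (indices.breaker.total.limit),
# which is why a small heap trips it easily during shard recovery.
GET _nodes/stats/breaker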

Thank you for the reply.
My apologies, I had not mentioned that the error is occurring on Elastic Cloud (Elasticsearch Service).
As suggested, I applied a configuration change to increase the memory, but the change itself fails.
The log is below.
I am also attaching an image showing the deployment's status.

[instance-0000000005] failed to join {instance-0000000007}{Ze2d_-UqTHGSkDwCTy5jSA}{dkW2xw1vSPSSTEU7l-HfDw}{10.46.32.98}{10.46.32.98:19637}{dim}{logical_availability_zone=zone-0, server_name=instance-0000000007.2bbbae81d213405dad0a72515dc00fb3, availability_zone=asia-northeast1-b, xpack.installed=true, region=unknown-region, instance_configuration=gcp.data.highio.1} with JoinRequest{sourceNode={instance-0000000005}{mllz1SCYQ56TTXNCVjz1xg}{GOVihiy0SJqMVHigQIfLCA}{10.46.32.111}{10.46.32.111:19348}{dim}{logical_availability_zone=zone-1, server_name=instance-0000000005.2bbbae81d213405dad0a72515dc00fb3, availability_zone=asia-northeast1-a, xpack.installed=true, instance_configuration=gcp.data.highio.1, region=unknown-region}, optionalJoin=Optional[Join{term=28, lastAcceptedTerm=26, lastAcceptedVersion=534971, sourceNode={instance-0000000005}{mllz1SCYQ56TTXNCVjz1xg}{GOVihiy0SJqMVHigQIfLCA}{10.46.32.111}{10.46.32.111:19348}{dim}{logical_availability_zone=zone-1, server_name=instance-0000000005.2bbbae81d213405dad0a72515dc00fb3, availability_zone=asia-northeast1-a, xpack.installed=true, instance_configuration=gcp.data.highio.1, region=unknown-region}, targetNode={instance-0000000007}{Ze2d_-UqTHGSkDwCTy5jSA}{dkW2xw1vSPSSTEU7l-HfDw}{10.46.32.98}{10.46.32.98:19637}{dim}{logical_availability_zone=zone-0, server_name=instance-0000000007.2bbbae81d213405dad0a72515dc00fb3, availability_zone=asia-northeast1-b, xpack.installed=true, region=unknown-region, instance_configuration=gcp.data.highio.1}}]} org.elasticsearch.transport.RemoteTransportException: [instance-0000000007][172.17.0.5:19637][internal:cluster/coordination/join] Caused by: java.lang.IllegalStateException: failure when sending a validation request to node at org.elasticsearch.cluster.coordination.Coordinator$2.onFailure(Coordinator.java:513) ~[elasticsearch-7.4.1.jar:7.4.1] at org.elasticsearch.action.ActionListenerResponseHandler.handleException(ActionListenerResponseHandler.java:59) ~[elasticsearch-7.4.1.jar:7.4.1] at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1120) ~[elasticsearch-7.4.1.jar:7.4.1] at org.elasticsearch.transport.InboundHandler.lambda$handleException$2(InboundHandler.java:243) ~[elasticsearch-7.4.1.jar:7.4.1] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:703) ~[elasticsearch-7.4.1.jar:7.4.1] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?] at java.lang.Thread.run(Thread.java:830) [?:?] 
Caused by: org.elasticsearch.transport.RemoteTransportException: [instance-0000000005][172.17.0.7:19348][internal:cluster/coordination/join/validate] Caused by: org.elasticsearch.common.breaker.CircuitBreakingException: [parent] Data too large, data for [<transport_request>] would be [415188700/395.9mb], which is larger than the limit of [394910105/376.6mb], real usage: [404474744/385.7mb], new bytes reserved: [10713956/10.2mb], usages [request=0/0b, fielddata=642/642b, in_flight_requests=10713956/10.2mb, accounting=26437/25.8kb] at org.elasticsearch.indices.breaker.HierarchyCircuitBreakerService.checkParentLimit(HierarchyCircuitBreakerService.java:343) ~[elasticsearch-7.4.1.jar:7.4.1] at org.elasticsearch.common.breaker.ChildMemoryCircuitBreaker.addEstimateBytesAndMaybeBreak(ChildMemoryCircuitBreaker.java:128) ~[elasticsearch-7.4.1.jar:7.4.1] at org.elasticsearch.transport.InboundHandler.handleRequest(InboundHandler.java:170) [elasticsearch-7.4.1.jar:7.4.1] at org.elasticsearch.transport.InboundHandler.messageReceived(InboundHandler.java:118) [elasticsearch-7.4.1.jar:7.4.1] at org.elasticsearch.transport.InboundHandler.inboundMessage(InboundHandler.java:102) [elasticsearch-7.4.1.jar:7.4.1] at org.elasticsearch.transport.TcpTransport.inboundMessage(TcpTransport.java:663) [elasticsearch-7.4.1.jar:7.4.1] at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:62) [transport-netty4-client-7.4.1.jar:7.4.1] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-transport-4.1.38.Final.jar:4.1.38.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-transport-4.1.38.Final.jar:4.1.38.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-transport-4.1.38.Final.jar:4.1.38.Final] at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:328) [netty-codec-4.1.38.Final.jar:4.1.38.Final] at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:302) [netty-codec-4.1.38.Final.jar:4.1.38.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-transport-4.1.38.Final.jar:4.1.38.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-transport-4.1.38.Final.jar:4.1.38.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-transport-4.1.38.Final.jar:4.1.38.Final] at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:241) [netty-handler-4.1.38.Final.jar:4.1.38.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-transport-4.1.38.Final.jar:4.1.38.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-transport-4.1.38.Final.jar:4.1.38.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-transport-4.1.38.Final.jar:4.1.38.Final] at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1475) [netty-handler-4.1.38.Final.jar:4.1.38.Final] at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1224) [netty-handler-4.1.38.Final.jar:4.1.38.Final] at 
io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1271) [netty-handler-4.1.38.Final.jar:4.1.38.Final] at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:505) [netty-codec-4.1.38.Final.jar:4.1.38.Final] at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:444) [netty-codec-4.1.38.Final.jar:4.1.38.Final] at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:283) [netty-codec-4.1.38.Final.jar:4.1.38.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-transport-4.1.38.Final.jar:4.1.38.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-transport-4.1.38.Final.jar:4.1.38.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-transport-4.1.38.Final.jar:4.1.38.Final] at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1421) [netty-transport-4.1.38.Final.jar:4.1.38.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-transport-4.1.38.Final.jar:4.1.38.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-transport-4.1.38.Final.jar:4.1.38.Final] at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:930) [netty-transport-4.1.38.Final.jar:4.1.38.Final] at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.38.Final.jar:4.1.38.Final] at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:697) [netty-transport-4.1.38.Final.jar:4.1.38.Final] at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:597) [netty-transport-4.1.38.Final.jar:4.1.38.Final] at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:551) [netty-transport-4.1.38.Final.jar:4.1.38.Final] at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:511) [netty-transport-4.1.38.Final.jar:4.1.38.Final] at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:918) [netty-common-4.1.38.Final.jar:4.1.38.Final] at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.38.Final.jar:4.1.38.Final] at java.lang.Thread.run(Thread.java:830) ~[?:?]

If this is Elastic Cloud, I think the best options are either to rebuild a new cluster and re-ingest the data, or to contact support.
Also, backups of the existing nodes are taken automatically, so if you restore from a backup when creating the new cluster, the data will be recovered as well.
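
For reference, the automatic backups can also be listed and restored through the snapshot API. On Elastic Cloud the built-in repository is usually named found-snapshots, but treat the repository and snapshot names below as assumptions and check your own deployment first:

# List repositories and the automatic snapshots they contain
GET _snapshot
GET _cat/snapshots/found-snapshots?v

# Restore selected indices from one snapshot into the new cluster
POST _snapshot/found-snapshots/<snapshot_name>/_restore
{
  "indices": "apm-*",
  "include_global_state": false
}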

Thank you for the reply.
I contacted Elastic Cloud support, and
they answered that the cause is a very large number of shards allocated relative to the memory assigned to the instance.
However, we never explicitly chose a shard count when creating the indices,
so I do not understand why so many shards are being generated.
Do shards keep increasing on their own unless some setting is applied?
Also, if there are best practices or documentation on the number of shards and nodes to use when designing mappings,
I would appreciate a pointer.

You can check the number of indices with GET _cat/indices from Dev Tools.
As for configuration, I would recommend watching the video I mentioned in my earlier reply.
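
As a sketch of that check (note that the APM indices in your logs carry rollover suffixes such as -000122 and -000220, so even with default shard settings the total shard count keeps growing as new indices are rolled over):

# Indices with primary/replica shard counts and store size
GET _cat/indices?v&h=index,pri,rep,docs.count,store.size&s=index

# All shards and where they are allocated
GET _cat/shards?v
GET _cat/allocation?v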

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.