Action [cluster:monitor/nodes/stats[n]] timed out

I can't copy the log file out.
The logs say:

Received response for a request that has timed out, sent [xxx/xxx] ago.....

This indicates that a timeout occurred when the master tried to collect node stats from one particular data node, and the time waited for a response grew from 10s to about 20 minutes over roughly 1.5 hours.

Meanwhile, the data node itself seemed to work fine, the cluster remained green, and all read/write operations ran normally. I had to restart the data node to fix this, and that worked.
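For reference, the requests that keep timing out are the per-node stats calls the master sends for monitoring. A rough way to watch the same slowness by hand (the host and node ID below are just placeholders for my environment) is to call the nodes stats API against the affected node directly, e.g.:

# ask only the affected node for its stats; when the node is in this state the
# call takes far longer than the 10s the collectors allow, or times out
curl -s 'http://localhost:9200/_nodes/<data_node_id>/stats?timeout=10s'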

I'm using Elasticsearch 7.17.1 on CentOS 7.5. My cluster has 56 data nodes, and this happened on only one of them.

Anyone encountered this kind of situation before?
Is this a bug?

Welcome to our community! :smiley:

It'd be useful if you could share more of the log from the master node. Otherwise we are really only guessing as to what might be happening.

I typed the first few lines in here (there may be some minor transcription mistakes; <?> means I don't know what it is, it looks like some ID string):

[2022-08-24T17:15:32,221][ERROR][o.e.x.m.c.i.IndexStatsCollector][<master_node_name>] collector [index-stats] timed out when collecting data: node [<data_node_id>] did not respond within [10s]
[2022-08-24T17:15:33,607][WARN][o.e.t.TransportService][<master_node_name>] Received response for a request that has timed out, sent [11.4s/11409ms] ago, timed out [1.4s/1401ms] ago, action [indices:monitor/stats[n]], node [{<data_node_name>}{<data_node_id>}{<?>}{<data_node_ip_addr>}{<data_node_ip_addr>:<transport_port>}{cdfhilrstw}{ml.machine_memory=404122529792, ml.max_open_jobs=512, xpack.installed=true, box_type=hot, ml.max_jvm_size=32212254720, transform.node=true}], id [623991677]
[2022-08-24T17:15:42,270][ERROR][o.e.x.m.c.c.ClusterStatsCollector][<master_node_name>] collector [cluster_stats] timed out when collecting data: node [<data_node_id>] did not respond within [10s]
[2022-08-24T17:15:43,965][WARN][o.e.t.TransportService][<master_node_name>] Received response for a request that has timed out, sent [11.8s/11808ms] ago, timed out [1.8s/1801ms] ago, action [cluster:monitor/stats[n]], node [{<data_node_name>}{<data_node_id>}{<?>}{<data_node_ip_addr>}{<data_node_ip_addr>:<transport_port>}{cdfhilrstw}{ml.machine_memory=404122529792, ml.max_open_jobs=512, xpack.installed=true, box_type=hot, ml.max_jvm_size=32212254720, transform.node=true}], id [623991699]
[2022-08-24T17:16:22,641][ERROR][o.e.x.m.c.i.IndexRecoveryCollector][<master_node_name>] collector [index_recovery] timed out when collecting data: node [<data_node_id>] did not respond within [10s]
[2022-08-24T17:16:25,240][WARN][o.e.t.TransportService][<master_node_name>] Received response for a request that has timed out, sent [12.6s/12610ms] ago, timed out [2.6s/26.3ms] ago, action [indices:monitor/recovery[n]], node [{<data_node_name>}{<data_node_id>}{<?>}{<data_node_ip_addr>}{<data_node_ip_addr>:<transport_port>}{cdfhilrstw}{ml.machine_memory=404122529792, ml.max_open_jobs=512, xpack.installed=true, box_type=hot, ml.max_jvm_size=32212254720, transform.node=true}], id [623996514]
[2022-08-24T17:16:32,657][ERROR][o.e.x.m.c.i.IndexStatsCollector][<master_node_name>] collector [index-stats] timed out when collecting data: node [<data_node_id>] did not respond within [10s]
[2022-08-24T17:16:37,848][WARN][o.e.t.TransportService][<master_node_name>] Received response for a request that has timed out, sent [15.2s/15213ms] ago, timed out [5.2s/5204ms] ago, action [indices:monitor/stats[n]], node [{<data_node_name>}{<data_node_id>}{<?>}{<data_node_ip_addr>}{<data_node_ip_addr>:<transport_port>}{cdfhilrstw}{ml.machine_memory=404122529792, ml.max_open_jobs=512, xpack.installed=true, box_type=hot, ml.max_jvm_size=32212254720, transform.node=true}], id [623997430]
[2022-08-24T17:16:42,270][ERROR][o.e.x.m.c.c.ClusterStatsCollector][<master_node_name>] collector [cluster_stats] timed out when collecting data: node [<data_node_id>] did not respond within [10s]
[2022-08-24T17:16:48,965][WARN][o.e.t.TransportService][<master_node_name>] Received response for a request that has timed out, sent [15.8s/15814ms] ago, timed out [5.8s/5805ms] ago, action [cluster:monitor/stats[n]], node [{<data_node_name>}{<data_node_id>}{<?>}{<data_node_ip_addr>}{<data_node_ip_addr>:<transport_port>}{cdfhilrstw}{ml.machine_memory=404122529792, ml.max_open_jobs=512, xpack.installed=true, box_type=hot, ml.max_jvm_size=32212254720, transform.node=true}], id [623998452]
[2022-08-24T17:16:57,673][WARN][o.e.c.InternalClusterInfoService][<master_node_name>] failed to retrieve stats for node [<data_node_id>]: [<data_node_name>] [<data_node_ip_addr>:<transport_port>][cluster:monitor/nodes/stats[n]] request_id [(lost it ....)] timed out after [15014ms]
[2022-08-24T17:16:57,690][WARN][o.e.c.InternalClusterInfoService][<master_node_name>] failed to retrieve shard stats for node [<data_node_id>]: [<data_node_name>] [<data_node_ip_addr>:<transport_port>][cluster:monitor/stats[n]] request_id [(lost it ....)] timed out after [15014ms]
[2022-08-24T17:16:57,788][WARN][o.e.t.TransportService][<master_node_name>] Received response for a request that has timed out, sent [15.2s/15214ms] ago, timed out [200ms/200ms] ago, action [cluster:monitor/nodes/stats[n]], node [{<data_node_name>}{<data_node_id>}{<?>}{<data_node_ip_addr>}{<data_node_ip_addr>:<transport_port>}{cdfhilrstw}{ml.machine_memory=404122529792, ml.max_open_jobs=512, xpack.installed=true, box_type=hot, ml.max_jvm_size=32212254720, transform.node=true}], id [623999441]
[2022-08-24T17:16:57,800][WARN][o.e.t.TransportService][<master_node_name>] Received response for a request that has timed out, sent [15.2s/15214ms] ago, timed out [200ms/200ms] ago, action [cluster:monitor/stats[n]], node [{<data_node_name>}{<data_node_id>}{<?>}{<data_node_ip_addr>}{<data_node_ip_addr>:<transport_port>}{cdfhilrstw}{ml.machine_memory=404122529792, ml.max_open_jobs=512, xpack.installed=true, box_type=hot, ml.max_jvm_size=32212254720, transform.node=true}], id [623999523]

……

About 1.5 hours later, the ERROR logs were unchanged, and the WARN logs read like "sent [19.2m/ms] ago", so the master node could still receive the responses, just after a very long time.

Then I restarted the data node process. The new process got a new node_id (I don't know why), so I removed the data directory of the old node.
The cluster seems OK now.

thx!

I found something in the thread dump files: some threads seem to have done nothing during the 38 minutes between the two dumps. Is that normal?
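(In case anyone wants to look at the same thing: a thread dump like the ones below can be captured with the JDK's jstack tool against the running Elasticsearch process; the PID here is just a placeholder.)

# capture a full thread dump of the Elasticsearch JVM, including lock info
jstack -l <es_pid> > /tmp/es_thread_dump.txt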

Thread dump 1:

"elasticsearch[<node_addr>-hotData2][clusterApplierService#updateTask][T#1]" #52 daemon prio=5 os_prio=0 cpu=4435241.83ms elapsed=6794012.45s tid=0x00007f3993e8cd90 nid=0x8657 waiting on condition  [0x00007f37838f7000]
   java.lang.Thread.State: WAITING (parking)
	at jdk.internal.misc.Unsafe.park(java.base@17.0.2/Native Method)
	- parking to wait for  <0x0000000080007bc8> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
	at java.util.concurrent.locks.LockSupport.park(java.base@17.0.2/LockSupport.java:341)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionNode.block(java.base@17.0.2/AbstractQueuedSynchronizer.java:506)
	at java.util.concurrent.ForkJoinPool.unmanagedBlock(java.base@17.0.2/ForkJoinPool.java:3463)
	at java.util.concurrent.ForkJoinPool.managedBlock(java.base@17.0.2/ForkJoinPool.java:3434)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(java.base@17.0.2/AbstractQueuedSynchronizer.java:1623)
	at java.util.concurrent.PriorityBlockingQueue.take(java.base@17.0.2/PriorityBlockingQueue.java:535)
	at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@17.0.2/ThreadPoolExecutor.java:1062)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@17.0.2/ThreadPoolExecutor.java:1122)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@17.0.2/ThreadPoolExecutor.java:635)
	at java.lang.Thread.run(java.base@17.0.2/Thread.java:833)

   Locked ownable synchronizers:
	- None

……

"elasticsearch[<node_addr>-hotData2][AsyncLucenePersistedState#updateTask][T#1]" #92 daemon prio=5 os_prio=0 cpu=335794.33ms elapsed=6794012.33s tid=0x00007f3768005d80 nid=0x867f waiting on condition  [0x00007f37810cf000]
   java.lang.Thread.State: WAITING (parking)
	at jdk.internal.misc.Unsafe.park(java.base@17.0.2/Native Method)
	- parking to wait for  <0x0000000080020448> (a java.util.concurrent.LinkedTransferQueue)
	at java.util.concurrent.locks.LockSupport.park(java.base@17.0.2/LockSupport.java:341)
	at java.util.concurrent.LinkedTransferQueue$Node.block(java.base@17.0.2/LinkedTransferQueue.java:470)
	at java.util.concurrent.ForkJoinPool.unmanagedBlock(java.base@17.0.2/ForkJoinPool.java:3463)
	at java.util.concurrent.ForkJoinPool.managedBlock(java.base@17.0.2/ForkJoinPool.java:3434)
	at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@17.0.2/LinkedTransferQueue.java:669)
	at java.util.concurrent.LinkedTransferQueue.xfer(java.base@17.0.2/LinkedTransferQueue.java:616)
	at java.util.concurrent.LinkedTransferQueue.take(java.base@17.0.2/LinkedTransferQueue.java:1286)
	at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:152)
	at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@17.0.2/ThreadPoolExecutor.java:1062)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@17.0.2/ThreadPoolExecutor.java:1122)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@17.0.2/ThreadPoolExecutor.java:635)
	at java.lang.Thread.run(java.base@17.0.2/Thread.java:833)

   Locked ownable synchronizers:
	- None

……

"elasticsearch[keepAlive/7.17.1]" #29 prio=5 os_prio=0 cpu=0.26ms elapsed=6794011.34s tid=0x00007f3993ec08a0 nid=0x86b2 waiting on condition  [0x00007f3681fde000]
   java.lang.Thread.State: WAITING (parking)
	at jdk.internal.misc.Unsafe.park(java.base@17.0.2/Native Method)
	- parking to wait for  <0x000000008000fad0> (a java.util.concurrent.CountDownLatch$Sync)
	at java.util.concurrent.locks.LockSupport.park(java.base@17.0.2/LockSupport.java:211)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(java.base@17.0.2/AbstractQueuedSynchronizer.java:715)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(java.base@17.0.2/AbstractQueuedSynchronizer.java:1047)
	at java.util.concurrent.CountDownLatch.await(java.base@17.0.2/CountDownLatch.java:230)
	at org.elasticsearch.bootstrap.Bootstrap$1.run(Bootstrap.java:85)
	at java.lang.Thread.run(java.base@17.0.2/Thread.java:833)

   Locked ownable synchronizers:
	- None

……

"elasticsearch[<node_addr>-hotData2][DanglingIndices#updateTask][T#1]" #153 daemon prio=5 os_prio=0 cpu=23282.72ms elapsed=6793643.05s tid=0x00007f375c18ffd0 nid=0xde03 waiting on condition  [0x00007f36813fe000]
   java.lang.Thread.State: WAITING (parking)
	at jdk.internal.misc.Unsafe.park(java.base@17.0.2/Native Method)
	- parking to wait for  <0x00000000866b0be0> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
	at java.util.concurrent.locks.LockSupport.park(java.base@17.0.2/LockSupport.java:341)
	at java.util.concurrent.LinkedTransferQueue$Node.block(java.base@17.0.2/LinkedTransferQueue.java:470)
	at java.util.concurrent.ForkJoinPool.unmanagedBlock(java.base@17.0.2/ForkJoinPool.java:3463)
	at java.util.concurrent.ForkJoinPool.managedBlock(java.base@17.0.2/ForkJoinPool.java:3434)
	at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@17.0.2/LinkedTransferQueue.java:669)
	at java.util.concurrent.LinkedTransferQueue.xfer(java.base@17.0.2/LinkedTransferQueue.java:616)
	at java.util.concurrent.LinkedTransferQueue.take(java.base@17.0.2/LinkedTransferQueue.java:1286)
	at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@17.0.2/ThreadPoolExecutor.java:1062)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@17.0.2/ThreadPoolExecutor.java:1122)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@17.0.2/ThreadPoolExecutor.java:635)
	at java.lang.Thread.run(java.base@17.0.2/Thread.java:833)

   Locked ownable synchronizers:
	- None

……

"elasticsearch[<node_addr>-hotData2][system_critical_read][T#1]" #374 daemon prio=5 os_prio=0 cpu=127775.45ms elapsed=6789448.60s tid=0x00007f36a8106220 nid=0x7c6c waiting on condition  [0x00007f1598150000]
   java.lang.Thread.State: WAITING (parking)
	at jdk.internal.misc.Unsafe.park(java.base@17.0.2/Native Method)
	- parking to wait for  <0x00000000804077e8> (a java.util.concurrent.LinkedTransferQueue)
	at java.util.concurrent.locks.LockSupport.park(java.base@17.0.2/LockSupport.java:341)
	at java.util.concurrent.LinkedTransferQueue$Node.block(java.base@17.0.2/LinkedTransferQueue.java:470)
	at java.util.concurrent.ForkJoinPool.unmanagedBlock(java.base@17.0.2/ForkJoinPool.java:3463)
	at java.util.concurrent.ForkJoinPool.managedBlock(java.base@17.0.2/ForkJoinPool.java:3434)
	at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@17.0.2/LinkedTransferQueue.java:669)
	at java.util.concurrent.LinkedTransferQueue.xfer(java.base@17.0.2/LinkedTransferQueue.java:616)
	at java.util.concurrent.LinkedTransferQueue.take(java.base@17.0.2/LinkedTransferQueue.java:1286)
	at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:152)
	at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@17.0.2/ThreadPoolExecutor.java:1062)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@17.0.2/ThreadPoolExecutor.java:1122)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@17.0.2/ThreadPoolExecutor.java:635)
	at java.lang.Thread.run(java.base@17.0.2/Thread.java:833)

   Locked ownable synchronizers:
	- None

"elasticsearch[<node_addr>-hotData2][system_critical_read][T#4]" #376 daemon prio=5 os_prio=0 cpu=126774.22ms elapsed=6789448.60s tid=0x00007f36981054b0 nid=0x7c6d waiting on condition  [0x00007f0f28131000]
   java.lang.Thread.State: WAITING (parking)
	at jdk.internal.misc.Unsafe.park(java.base@17.0.2/Native Method)
	- parking to wait for  <0x00000000804077e8> (a java.util.concurrent.LinkedTransferQueue)
	at java.util.concurrent.locks.LockSupport.park(java.base@17.0.2/LockSupport.java:341)
	at java.util.concurrent.LinkedTransferQueue$Node.block(java.base@17.0.2/LinkedTransferQueue.java:470)
	at java.util.concurrent.ForkJoinPool.unmanagedBlock(java.base@17.0.2/ForkJoinPool.java:3463)
	at java.util.concurrent.ForkJoinPool.managedBlock(java.base@17.0.2/ForkJoinPool.java:3434)
	at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@17.0.2/LinkedTransferQueue.java:669)
	at java.util.concurrent.LinkedTransferQueue.xfer(java.base@17.0.2/LinkedTransferQueue.java:616)
	at java.util.concurrent.LinkedTransferQueue.take(java.base@17.0.2/LinkedTransferQueue.java:1286)
	at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:152)
	at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@17.0.2/ThreadPoolExecutor.java:1062)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@17.0.2/ThreadPoolExecutor.java:1122)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@17.0.2/ThreadPoolExecutor.java:635)
	at java.lang.Thread.run(java.base@17.0.2/Thread.java:833)

   Locked ownable synchronizers:
	- None

"elasticsearch[<node_addr>-hotData2][system_critical_read][T#2]" #377 daemon prio=5 os_prio=0 cpu=123308.96ms elapsed=6789448.60s tid=0x00007f369c104900 nid=0x7c6e waiting on condition  [0x00007f0f20147000]
   java.lang.Thread.State: WAITING (parking)
	at jdk.internal.misc.Unsafe.park(java.base@17.0.2/Native Method)
	- parking to wait for  <0x00000000804077e8> (a java.util.concurrent.LinkedTransferQueue)
	at java.util.concurrent.locks.LockSupport.park(java.base@17.0.2/LockSupport.java:341)
	at java.util.concurrent.LinkedTransferQueue$Node.block(java.base@17.0.2/LinkedTransferQueue.java:470)
	at java.util.concurrent.ForkJoinPool.unmanagedBlock(java.base@17.0.2/ForkJoinPool.java:3463)
	at java.util.concurrent.ForkJoinPool.managedBlock(java.base@17.0.2/ForkJoinPool.java:3434)
	at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@17.0.2/LinkedTransferQueue.java:669)
	at java.util.concurrent.LinkedTransferQueue.xfer(java.base@17.0.2/LinkedTransferQueue.java:616)
	at java.util.concurrent.LinkedTransferQueue.take(java.base@17.0.2/LinkedTransferQueue.java:1286)
	at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:152)
	at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@17.0.2/ThreadPoolExecutor.java:1062)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@17.0.2/ThreadPoolExecutor.java:1122)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@17.0.2/ThreadPoolExecutor.java:635)
	at java.lang.Thread.run(java.base@17.0.2/Thread.java:833)

   Locked ownable synchronizers:
	- None

"elasticsearch[<node_addr>-hotData2][system_critical_read][T#3]" #378 daemon prio=5 os_prio=0 cpu=125789.90ms elapsed=6789448.60s tid=0x00007f36a4106fd0 nid=0x7c6f waiting on condition  [0x00007f0e306da000]
   java.lang.Thread.State: WAITING (parking)
	at jdk.internal.misc.Unsafe.park(java.base@17.0.2/Native Method)
	- parking to wait for  <0x00000000804077e8> (a java.util.concurrent.LinkedTransferQueue)
	at java.util.concurrent.locks.LockSupport.park(java.base@17.0.2/LockSupport.java:341)
	at java.util.concurrent.LinkedTransferQueue$Node.block(java.base@17.0.2/LinkedTransferQueue.java:470)
	at java.util.concurrent.ForkJoinPool.unmanagedBlock(java.base@17.0.2/ForkJoinPool.java:3463)
	at java.util.concurrent.ForkJoinPool.managedBlock(java.base@17.0.2/ForkJoinPool.java:3434)
	at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@17.0.2/LinkedTransferQueue.java:669)
	at java.util.concurrent.LinkedTransferQueue.xfer(java.base@17.0.2/LinkedTransferQueue.java:616)
	at java.util.concurrent.LinkedTransferQueue.take(java.base@17.0.2/LinkedTransferQueue.java:1286)
	at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:152)
	at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@17.0.2/ThreadPoolExecutor.java:1062)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@17.0.2/ThreadPoolExecutor.java:1122)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@17.0.2/ThreadPoolExecutor.java:635)
	at java.lang.Thread.run(java.base@17.0.2/Thread.java:833)

   Locked ownable synchronizers:
	- None

"elasticsearch[<node_addr>-hotData2][system_critical_read][T#5]" #375 daemon prio=5 os_prio=0 cpu=128428.89ms elapsed=6789448.60s tid=0x00007f36a0105480 nid=0x7c70 waiting on condition  [0x00007f0e305d9000]
   java.lang.Thread.State: WAITING (parking)
	at jdk.internal.misc.Unsafe.park(java.base@17.0.2/Native Method)
	- parking to wait for  <0x00000000804077e8> (a java.util.concurrent.LinkedTransferQueue)
	at java.util.concurrent.locks.LockSupport.park(java.base@17.0.2/LockSupport.java:341)
	at java.util.concurrent.LinkedTransferQueue$Node.block(java.base@17.0.2/LinkedTransferQueue.java:470)
	at java.util.concurrent.ForkJoinPool.unmanagedBlock(java.base@17.0.2/ForkJoinPool.java:3463)
	at java.util.concurrent.ForkJoinPool.managedBlock(java.base@17.0.2/ForkJoinPool.java:3434)
	at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@17.0.2/LinkedTransferQueue.java:669)
	at java.util.concurrent.LinkedTransferQueue.xfer(java.base@17.0.2/LinkedTransferQueue.java:616)
	at java.util.concurrent.LinkedTransferQueue.take(java.base@17.0.2/LinkedTransferQueue.java:1286)
	at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:152)
	at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@17.0.2/ThreadPoolExecutor.java:1062)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@17.0.2/ThreadPoolExecutor.java:1122)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@17.0.2/ThreadPoolExecutor.java:635)
	at java.lang.Thread.run(java.base@17.0.2/Thread.java:833)

   Locked ownable synchronizers:
	- None

……

"elasticsearch[<node_addr>-hotData2][system_critical_write][T#1]" #41354 daemon prio=5 os_prio=0 cpu=186.60ms elapsed=6533976.33s tid=0x00007f37101071c0 nid=0x4c36 waiting on condition  [0x00007f349f6d5000]
   java.lang.Thread.State: WAITING (parking)
	at jdk.internal.misc.Unsafe.park(java.base@17.0.2/Native Method)
	- parking to wait for  <0x0000000080402cf8> (a java.util.concurrent.LinkedTransferQueue)
	at java.util.concurrent.locks.LockSupport.park(java.base@17.0.2/LockSupport.java:341)
	at java.util.concurrent.LinkedTransferQueue$Node.block(java.base@17.0.2/LinkedTransferQueue.java:470)
	at java.util.concurrent.ForkJoinPool.unmanagedBlock(java.base@17.0.2/ForkJoinPool.java:3463)
	at java.util.concurrent.ForkJoinPool.managedBlock(java.base@17.0.2/ForkJoinPool.java:3434)
	at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@17.0.2/LinkedTransferQueue.java:669)
	at java.util.concurrent.LinkedTransferQueue.xfer(java.base@17.0.2/LinkedTransferQueue.java:616)
	at java.util.concurrent.LinkedTransferQueue.take(java.base@17.0.2/LinkedTransferQueue.java:1286)
	at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:152)
	at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@17.0.2/ThreadPoolExecutor.java:1062)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@17.0.2/ThreadPoolExecutor.java:1122)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@17.0.2/ThreadPoolExecutor.java:635)
	at java.lang.Thread.run(java.base@17.0.2/Thread.java:833)

   Locked ownable synchronizers:
	- None
(there are 5 of these system_critical_write threads waiting on the same object)
……

"elasticsearch[<node_addr>-hotData2][fetch_shard_store][T#7]" #1107030 daemon prio=5 os_prio=0 cpu=13.28ms elapsed=55031.40s tid=0x00007f3720110820 nid=0x6a74 waiting on condition  [0x00007f34a21e0000]
   java.lang.Thread.State: WAITING (parking)
	at jdk.internal.misc.Unsafe.park(java.base@17.0.2/Native Method)
	- parking to wait for  <0x0000000080412ea8> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
	at java.util.concurrent.locks.LockSupport.park(java.base@17.0.2/LockSupport.java:341)
	at java.util.concurrent.LinkedTransferQueue$Node.block(java.base@17.0.2/LinkedTransferQueue.java:470)
	at java.util.concurrent.ForkJoinPool.unmanagedBlock(java.base@17.0.2/ForkJoinPool.java:3463)
	at java.util.concurrent.ForkJoinPool.managedBlock(java.base@17.0.2/ForkJoinPool.java:3434)
	at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@17.0.2/LinkedTransferQueue.java:669)
	at java.util.concurrent.LinkedTransferQueue.xfer(java.base@17.0.2/LinkedTransferQueue.java:616)
	at java.util.concurrent.LinkedTransferQueue.take(java.base@17.0.2/LinkedTransferQueue.java:1286)
	at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@17.0.2/ThreadPoolExecutor.java:1062)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@17.0.2/ThreadPoolExecutor.java:1122)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@17.0.2/ThreadPoolExecutor.java:635)
	at java.lang.Thread.run(java.base@17.0.2/Thread.java:833)

   Locked ownable synchronizers:
	- None

Thread dump 2 (about 38m later):

"elasticsearch[<node_addr>-hotData2][clusterApplierService#updateTask][T#1]" #52 daemon prio=5 os_prio=0 cpu=4435241.83ms elapsed=6796324.40s tid=0x00007f3993e8cd90 nid=0x8657 waiting on condition  [0x00007f37838f7000]
   java.lang.Thread.State: WAITING (parking)
	at jdk.internal.misc.Unsafe.park(java.base@17.0.2/Native Method)
	- parking to wait for  <0x0000000080007bc8> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
	at java.util.concurrent.locks.LockSupport.park(java.base@17.0.2/LockSupport.java:341)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionNode.block(java.base@17.0.2/AbstractQueuedSynchronizer.java:506)
	at java.util.concurrent.ForkJoinPool.unmanagedBlock(java.base@17.0.2/ForkJoinPool.java:3463)
	at java.util.concurrent.ForkJoinPool.managedBlock(java.base@17.0.2/ForkJoinPool.java:3434)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(java.base@17.0.2/AbstractQueuedSynchronizer.java:1623)
	at java.util.concurrent.PriorityBlockingQueue.take(java.base@17.0.2/PriorityBlockingQueue.java:535)
	at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@17.0.2/ThreadPoolExecutor.java:1062)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@17.0.2/ThreadPoolExecutor.java:1122)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@17.0.2/ThreadPoolExecutor.java:635)
	at java.lang.Thread.run(java.base@17.0.2/Thread.java:833)

   Locked ownable synchronizers:
	- None

……

"elasticsearch[<node_addr>-hotData2][AsyncLucenePersistedState#updateTask][T#1]" #92 daemon prio=5 os_prio=0 cpu=335794.33ms elapsed=6796324.28s tid=0x00007f3768005d80 nid=0x867f waiting on condition  [0x00007f37810cf000]
   java.lang.Thread.State: WAITING (parking)
	at jdk.internal.misc.Unsafe.park(java.base@17.0.2/Native Method)
	- parking to wait for  <0x0000000080020448> (a java.util.concurrent.LinkedTransferQueue)
	at java.util.concurrent.locks.LockSupport.park(java.base@17.0.2/LockSupport.java:341)
	at java.util.concurrent.LinkedTransferQueue$Node.block(java.base@17.0.2/LinkedTransferQueue.java:470)
	at java.util.concurrent.ForkJoinPool.unmanagedBlock(java.base@17.0.2/ForkJoinPool.java:3463)
	at java.util.concurrent.ForkJoinPool.managedBlock(java.base@17.0.2/ForkJoinPool.java:3434)
	at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@17.0.2/LinkedTransferQueue.java:669)
	at java.util.concurrent.LinkedTransferQueue.xfer(java.base@17.0.2/LinkedTransferQueue.java:616)
	at java.util.concurrent.LinkedTransferQueue.take(java.base@17.0.2/LinkedTransferQueue.java:1286)
	at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:152)
	at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@17.0.2/ThreadPoolExecutor.java:1062)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@17.0.2/ThreadPoolExecutor.java:1122)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@17.0.2/ThreadPoolExecutor.java:635)
	at java.lang.Thread.run(java.base@17.0.2/Thread.java:833)

   Locked ownable synchronizers:
	- None

……

"elasticsearch[keepAlive/7.17.1]" #29 prio=5 os_prio=0 cpu=0.26ms elapsed=6796323.29s tid=0x00007f3993ec08a0 nid=0x86b2 waiting on condition  [0x00007f3681fde000]
   java.lang.Thread.State: WAITING (parking)
	at jdk.internal.misc.Unsafe.park(java.base@17.0.2/Native Method)
	- parking to wait for  <0x000000008000fad0> (a java.util.concurrent.CountDownLatch$Sync)
	at java.util.concurrent.locks.LockSupport.park(java.base@17.0.2/LockSupport.java:211)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(java.base@17.0.2/AbstractQueuedSynchronizer.java:715)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(java.base@17.0.2/AbstractQueuedSynchronizer.java:1047)
	at java.util.concurrent.CountDownLatch.await(java.base@17.0.2/CountDownLatch.java:230)
	at org.elasticsearch.bootstrap.Bootstrap$1.run(Bootstrap.java:85)
	at java.lang.Thread.run(java.base@17.0.2/Thread.java:833)

   Locked ownable synchronizers:
	- None

……

"elasticsearch[<node_addr>-hotData2][DanglingIndices#updateTask][T#1]" #153 daemon prio=5 os_prio=0 cpu=23282.72ms elapsed=6795955.01s tid=0x00007f375c18ffd0 nid=0xde03 waiting on condition  [0x00007f36813fe000]
   java.lang.Thread.State: WAITING (parking)
	at jdk.internal.misc.Unsafe.park(java.base@17.0.2/Native Method)
	- parking to wait for  <0x00000000866b0be0> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
	at java.util.concurrent.locks.LockSupport.park(java.base@17.0.2/LockSupport.java:341)
	at java.util.concurrent.LinkedTransferQueue$Node.block(java.base@17.0.2/LinkedTransferQueue.java:470)
	at java.util.concurrent.ForkJoinPool.unmanagedBlock(java.base@17.0.2/ForkJoinPool.java:3463)
	at java.util.concurrent.ForkJoinPool.managedBlock(java.base@17.0.2/ForkJoinPool.java:3434)
	at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@17.0.2/LinkedTransferQueue.java:669)
	at java.util.concurrent.LinkedTransferQueue.xfer(java.base@17.0.2/LinkedTransferQueue.java:616)
	at java.util.concurrent.LinkedTransferQueue.take(java.base@17.0.2/LinkedTransferQueue.java:1286)
	at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@17.0.2/ThreadPoolExecutor.java:1062)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@17.0.2/ThreadPoolExecutor.java:1122)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@17.0.2/ThreadPoolExecutor.java:635)
	at java.lang.Thread.run(java.base@17.0.2/Thread.java:833)

   Locked ownable synchronizers:
	- None

……

"elasticsearch[<node_addr>-hotData2][system_critical_read][T#1]" #374 daemon prio=5 os_prio=0 cpu=127775.45ms elapsed=6791760.55s tid=0x00007f36a8106220 nid=0x7c6c waiting on condition  [0x00007f1598150000]
   java.lang.Thread.State: WAITING (parking)
	at jdk.internal.misc.Unsafe.park(java.base@17.0.2/Native Method)
	- parking to wait for  <0x00000000804077e8> (a java.util.concurrent.LinkedTransferQueue)
	at java.util.concurrent.locks.LockSupport.park(java.base@17.0.2/LockSupport.java:341)
	at java.util.concurrent.LinkedTransferQueue$Node.block(java.base@17.0.2/LinkedTransferQueue.java:470)
	at java.util.concurrent.ForkJoinPool.unmanagedBlock(java.base@17.0.2/ForkJoinPool.java:3463)
	at java.util.concurrent.ForkJoinPool.managedBlock(java.base@17.0.2/ForkJoinPool.java:3434)
	at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@17.0.2/LinkedTransferQueue.java:669)
	at java.util.concurrent.LinkedTransferQueue.xfer(java.base@17.0.2/LinkedTransferQueue.java:616)
	at java.util.concurrent.LinkedTransferQueue.take(java.base@17.0.2/LinkedTransferQueue.java:1286)
	at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:152)
	at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@17.0.2/ThreadPoolExecutor.java:1062)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@17.0.2/ThreadPoolExecutor.java:1122)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@17.0.2/ThreadPoolExecutor.java:635)
	at java.lang.Thread.run(java.base@17.0.2/Thread.java:833)

   Locked ownable synchronizers:
	- None

"elasticsearch[<node_addr>-hotData2][system_critical_read][T#4]" #376 daemon prio=5 os_prio=0 cpu=126774.22ms elapsed=6791760.55s tid=0x00007f36981054b0 nid=0x7c6d waiting on condition  [0x00007f0f28131000]
   java.lang.Thread.State: WAITING (parking)
	at jdk.internal.misc.Unsafe.park(java.base@17.0.2/Native Method)
	- parking to wait for  <0x00000000804077e8> (a java.util.concurrent.LinkedTransferQueue)
	at java.util.concurrent.locks.LockSupport.park(java.base@17.0.2/LockSupport.java:341)
	at java.util.concurrent.LinkedTransferQueue$Node.block(java.base@17.0.2/LinkedTransferQueue.java:470)
	at java.util.concurrent.ForkJoinPool.unmanagedBlock(java.base@17.0.2/ForkJoinPool.java:3463)
	at java.util.concurrent.ForkJoinPool.managedBlock(java.base@17.0.2/ForkJoinPool.java:3434)
	at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@17.0.2/LinkedTransferQueue.java:669)
	at java.util.concurrent.LinkedTransferQueue.xfer(java.base@17.0.2/LinkedTransferQueue.java:616)
	at java.util.concurrent.LinkedTransferQueue.take(java.base@17.0.2/LinkedTransferQueue.java:1286)
	at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:152)
	at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@17.0.2/ThreadPoolExecutor.java:1062)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@17.0.2/ThreadPoolExecutor.java:1122)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@17.0.2/ThreadPoolExecutor.java:635)
	at java.lang.Thread.run(java.base@17.0.2/Thread.java:833)

   Locked ownable synchronizers:
	- None

"elasticsearch[<node_addr>-hotData2][system_critical_read][T#2]" #377 daemon prio=5 os_prio=0 cpu=123308.96ms elapsed=6791760.55s tid=0x00007f369c104900 nid=0x7c6e waiting on condition  [0x00007f0f20147000]
   java.lang.Thread.State: WAITING (parking)
	at jdk.internal.misc.Unsafe.park(java.base@17.0.2/Native Method)
	- parking to wait for  <0x00000000804077e8> (a java.util.concurrent.LinkedTransferQueue)
	at java.util.concurrent.locks.LockSupport.park(java.base@17.0.2/LockSupport.java:341)
	at java.util.concurrent.LinkedTransferQueue$Node.block(java.base@17.0.2/LinkedTransferQueue.java:470)
	at java.util.concurrent.ForkJoinPool.unmanagedBlock(java.base@17.0.2/ForkJoinPool.java:3463)
	at java.util.concurrent.ForkJoinPool.managedBlock(java.base@17.0.2/ForkJoinPool.java:3434)
	at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@17.0.2/LinkedTransferQueue.java:669)
	at java.util.concurrent.LinkedTransferQueue.xfer(java.base@17.0.2/LinkedTransferQueue.java:616)
	at java.util.concurrent.LinkedTransferQueue.take(java.base@17.0.2/LinkedTransferQueue.java:1286)
	at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:152)
	at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@17.0.2/ThreadPoolExecutor.java:1062)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@17.0.2/ThreadPoolExecutor.java:1122)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@17.0.2/ThreadPoolExecutor.java:635)
	at java.lang.Thread.run(java.base@17.0.2/Thread.java:833)

   Locked ownable synchronizers:
	- None

"elasticsearch[<node_addr>-hotData2][system_critical_read][T#3]" #378 daemon prio=5 os_prio=0 cpu=125789.90ms elapsed=6791760.55s tid=0x00007f36a4106fd0 nid=0x7c6f waiting on condition  [0x00007f0e306da000]
   java.lang.Thread.State: WAITING (parking)
	at jdk.internal.misc.Unsafe.park(java.base@17.0.2/Native Method)
	- parking to wait for  <0x00000000804077e8> (a java.util.concurrent.LinkedTransferQueue)
	at java.util.concurrent.locks.LockSupport.park(java.base@17.0.2/LockSupport.java:341)
	at java.util.concurrent.LinkedTransferQueue$Node.block(java.base@17.0.2/LinkedTransferQueue.java:470)
	at java.util.concurrent.ForkJoinPool.unmanagedBlock(java.base@17.0.2/ForkJoinPool.java:3463)
	at java.util.concurrent.ForkJoinPool.managedBlock(java.base@17.0.2/ForkJoinPool.java:3434)
	at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@17.0.2/LinkedTransferQueue.java:669)
	at java.util.concurrent.LinkedTransferQueue.xfer(java.base@17.0.2/LinkedTransferQueue.java:616)
	at java.util.concurrent.LinkedTransferQueue.take(java.base@17.0.2/LinkedTransferQueue.java:1286)
	at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:152)
	at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@17.0.2/ThreadPoolExecutor.java:1062)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@17.0.2/ThreadPoolExecutor.java:1122)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@17.0.2/ThreadPoolExecutor.java:635)
	at java.lang.Thread.run(java.base@17.0.2/Thread.java:833)

   Locked ownable synchronizers:
	- None

"elasticsearch[<node_addr>-hotData2][system_critical_read][T#5]" #375 daemon prio=5 os_prio=0 cpu=128428.89ms elapsed=6791760.55s tid=0x00007f36a0105480 nid=0x7c70 waiting on condition  [0x00007f0e305d9000]
   java.lang.Thread.State: WAITING (parking)
	at jdk.internal.misc.Unsafe.park(java.base@17.0.2/Native Method)
	- parking to wait for  <0x00000000804077e8> (a java.util.concurrent.LinkedTransferQueue)
	at java.util.concurrent.locks.LockSupport.park(java.base@17.0.2/LockSupport.java:341)
	at java.util.concurrent.LinkedTransferQueue$Node.block(java.base@17.0.2/LinkedTransferQueue.java:470)
	at java.util.concurrent.ForkJoinPool.unmanagedBlock(java.base@17.0.2/ForkJoinPool.java:3463)
	at java.util.concurrent.ForkJoinPool.managedBlock(java.base@17.0.2/ForkJoinPool.java:3434)
	at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@17.0.2/LinkedTransferQueue.java:669)
	at java.util.concurrent.LinkedTransferQueue.xfer(java.base@17.0.2/LinkedTransferQueue.java:616)
	at java.util.concurrent.LinkedTransferQueue.take(java.base@17.0.2/LinkedTransferQueue.java:1286)
	at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:152)
	at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@17.0.2/ThreadPoolExecutor.java:1062)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@17.0.2/ThreadPoolExecutor.java:1122)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@17.0.2/ThreadPoolExecutor.java:635)
	at java.lang.Thread.run(java.base@17.0.2/Thread.java:833)

   Locked ownable synchronizers:
	- None

……
"elasticsearch[<node_addr>-hotData2][system_critical_write][T#1]" #41354 daemon prio=5 os_prio=0 cpu=186.60ms elapsed=6536288.28s tid=0x00007f37101071c0 nid=0x4c36 waiting on condition  [0x00007f349f6d5000]
   java.lang.Thread.State: WAITING (parking)
	at jdk.internal.misc.Unsafe.park(java.base@17.0.2/Native Method)
	- parking to wait for  <0x0000000080402cf8> (a java.util.concurrent.LinkedTransferQueue)
	at java.util.concurrent.locks.LockSupport.park(java.base@17.0.2/LockSupport.java:341)
	at java.util.concurrent.LinkedTransferQueue$Node.block(java.base@17.0.2/LinkedTransferQueue.java:470)
	at java.util.concurrent.ForkJoinPool.unmanagedBlock(java.base@17.0.2/ForkJoinPool.java:3463)
	at java.util.concurrent.ForkJoinPool.managedBlock(java.base@17.0.2/ForkJoinPool.java:3434)
	at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@17.0.2/LinkedTransferQueue.java:669)
	at java.util.concurrent.LinkedTransferQueue.xfer(java.base@17.0.2/LinkedTransferQueue.java:616)
	at java.util.concurrent.LinkedTransferQueue.take(java.base@17.0.2/LinkedTransferQueue.java:1286)
	at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:152)
	at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@17.0.2/ThreadPoolExecutor.java:1062)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@17.0.2/ThreadPoolExecutor.java:1122)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@17.0.2/ThreadPoolExecutor.java:635)
	at java.lang.Thread.run(java.base@17.0.2/Thread.java:833)

   Locked ownable synchronizers:
	- None
(there are 5 of these system_critical_write threads waiting on the same object)
……

"elasticsearch[<node_addr>-hotData2][fetch_shard_store][T#7]" #1107030 daemon prio=5 os_prio=0 cpu=13.28ms elapsed=57343.35s tid=0x00007f3720110820 nid=0x6a74 waiting on condition  [0x00007f34a21e0000]
   java.lang.Thread.State: WAITING (parking)
	at jdk.internal.misc.Unsafe.park(java.base@17.0.2/Native Method)
	- parking to wait for  <0x0000000080412ea8> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
	at java.util.concurrent.locks.LockSupport.park(java.base@17.0.2/LockSupport.java:341)
	at java.util.concurrent.LinkedTransferQueue$Node.block(java.base@17.0.2/LinkedTransferQueue.java:470)
	at java.util.concurrent.ForkJoinPool.unmanagedBlock(java.base@17.0.2/ForkJoinPool.java:3463)
	at java.util.concurrent.ForkJoinPool.managedBlock(java.base@17.0.2/ForkJoinPool.java:3434)
	at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@17.0.2/LinkedTransferQueue.java:669)
	at java.util.concurrent.LinkedTransferQueue.xfer(java.base@17.0.2/LinkedTransferQueue.java:616)
	at java.util.concurrent.LinkedTransferQueue.take(java.base@17.0.2/LinkedTransferQueue.java:1286)
	at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@17.0.2/ThreadPoolExecutor.java:1062)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@17.0.2/ThreadPoolExecutor.java:1122)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@17.0.2/ThreadPoolExecutor.java:635)
	at java.lang.Thread.run(java.base@17.0.2/Thread.java:833)

   Locked ownable synchronizers:
	- None

My cluster had been running normally for more than 40 days before this happened for the first time. In the 12 days after that, it occurred four times, each time on a different node.

This has now occurred randomly on 8 of my data nodes :sob:.
I still don't have any idea how to reproduce it or why it happens.

Once, before the data node completely stopped responding to management requests, I called the cat thread_pool API and waited for a long time; it showed that the management thread pool on that node had more than 190,000 queued tasks, while all the other thread pools looked normal.
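For reference, the check I ran was essentially the cat thread_pool API filtered to the management pool; the host and column list below are just an example of how to see the queue depth per node:

# show how many management tasks are active/queued/rejected on each node
curl -s 'http://localhost:9200/_cat/thread_pool/management?v&h=node_name,name,active,queue,rejected'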

LOGS (log level: info)

Log from the master node:

[2022-09-15T01:58:00,001][INFO ][o.e.x.m.MlDailyMaintenanceService] [master-node] triggering scheduled [ML] maintenance tasks
[2022-09-15T01:58:00,005][INFO ][o.e.x.m.a.TransportDeleteExpiredDataAction] [master-node] Deleting expired data
[2022-09-15T01:58:00,018][INFO ][o.e.x.m.j.r.UnusedStatsRemover] [master-node] Successfully deleted [0] unused stats documents
[2022-09-15T01:58:00,019][INFO ][o.e.x.m.a.TransportDeleteExpiredDataAction] [master-node] Completed deletion of expired ML data
[2022-09-15T01:58:00,019][INFO ][o.e.x.m.MlDailyMaintenanceService] [master-node] Successfully completed [ML] maintenance task: triggerDeleteExpiredDataTask
[2022-09-15T03:52:38,351][ERROR][o.e.x.m.c.i.IndexRecoveryCollector] [master-node] collector [index_recovery] timed out when collecting data: node [TjAYjkLwSz6O64RgnuOTtQ] did not respond within [10s]
[2022-09-15T03:52:39,966][WARN ][o.e.t.TransportService   ] [master-node] Received response for a request that has timed out, sent [11.6s/11608ms] ago, timed out [1.6s/1601ms] ago, action [indices:monitor/recovery[n]], node [{<datanode-ip>-hotData1}{TjAYjkLwSz6O64RgnuOTtQ}{jMDOUcAzQxadkHYcq4re8w}{<datanode-ip>}{<datanode-ip>:9301}{cdfhilrstw}{ml.machine_memory=404122529792, ml.max_open_jobs=512, box_type=hot, xpack.installed=true, ml.max_jvm_size=32212254720, transform.node=true}], id [861890090]
[2022-09-15T03:52:48,362][ERROR][o.e.x.m.c.i.IndexStatsCollector] [master-node] collector [index-stats] timed out when collecting data: node [TjAYjkLwSz6O64RgnuOTtQ] did not respond within [10s]
[2022-09-15T03:52:51,709][WARN ][o.e.t.TransportService   ] [master-node] Received response for a request that has timed out, sent [13.4s/13409ms] ago, timed out [3.4s/3403ms] ago, action [indices:monitor/stats[n]], node [{<datanode-ip>-hotData1}{TjAYjkLwSz6O64RgnuOTtQ}{jMDOUcAzQxadkHYcq4re8w}{<datanode-ip>}{<datanode-ip>:9301}{cdfhilrstw}{ml.machine_memory=404122529792, ml.max_open_jobs=512, box_type=hot, xpack.installed=true, ml.max_jvm_size=32212254720, transform.node=true}], id [861891006]
[2022-09-15T03:52:58,552][ERROR][o.e.x.m.c.c.ClusterStatsCollector] [master-node] collector [cluster_stats] timed out when collecting data: node [TjAYjkLwSz6O64RgnuOTtQ] did not respond within [10s]
[2022-09-15T03:53:03,215][WARN ][o.e.t.TransportService   ] [master-node] Received response for a request that has timed out, sent [14.6s/14611ms] ago, timed out [4.6s/4603ms] ago, action [cluster:monitor/stats[n]], node [{<datanode-ip>-hotData1}{TjAYjkLwSz6O64RgnuOTtQ}{jMDOUcAzQxadkHYcq4re8w}{<datanode-ip>}{<datanode-ip>:9301}{cdfhilrstw}{ml.machine_memory=404122529792, ml.max_open_jobs=512, box_type=hot, xpack.installed=true, ml.max_jvm_size=32212254720, transform.node=true}], id [861892078]
[2022-09-15T03:53:24,863][WARN ][o.e.c.InternalClusterInfoService] [master-node] failed to retrieve stats for node [TjAYjkLwSz6O64RgnuOTtQ]: [<datanode-ip>-hotData1][<datanode-ip>:9301][cluster:monitor/nodes/stats[n]] request_id [861894021] timed out after [15012ms]
[2022-09-15T03:53:24,873][WARN ][o.e.c.InternalClusterInfoService] [master-node] failed to retrieve shard stats from node [TjAYjkLwSz6O64RgnuOTtQ]: [<datanode-ip>-hotData1][<datanode-ip>:9301][indices:monitor/stats[n]] request_id [861894067] timed out after [15012ms]
[2022-09-15T03:53:24,980][WARN ][o.e.t.TransportService   ] [master-node] Received response for a request that has timed out, sent [15.2s/15212ms] ago, timed out [200ms/200ms] ago, action [cluster:monitor/nodes/stats[n]], node [{<datanode-ip>-hotData1}{TjAYjkLwSz6O64RgnuOTtQ}{jMDOUcAzQxadkHYcq4re8w}{<datanode-ip>}{<datanode-ip>:9301}{cdfhilrstw}{ml.machine_memory=404122529792, ml.max_open_jobs=512, box_type=hot, xpack.installed=true, ml.max_jvm_size=32212254720, transform.node=true}], id [861894021]
[2022-09-15T03:53:25,008][WARN ][o.e.t.TransportService   ] [master-node] Received response for a request that has timed out, sent [15.2s/15212ms] ago, timed out [200ms/200ms] ago, action [indices:monitor/stats[n]], node [{<datanode-ip>-hotData1}{TjAYjkLwSz6O64RgnuOTtQ}{jMDOUcAzQxadkHYcq4re8w}{<datanode-ip>}{<datanode-ip>:9301}{cdfhilrstw}{ml.machine_memory=404122529792, ml.max_open_jobs=512, box_type=hot, xpack.installed=true, ml.max_jvm_size=32212254720, transform.node=true}], id [861894067]
[2022-09-15T03:53:38,349][ERROR][o.e.x.m.c.i.IndexRecoveryCollector] [master-node] collector [index_recovery] timed out when collecting data: node [TjAYjkLwSz6O64RgnuOTtQ] did not respond within [10s]
[2022-09-15T03:53:43,511][WARN ][o.e.t.TransportService   ] [master-node] Received response for a request that has timed out, sent [15.2s/15212ms] ago, timed out [5.2s/5204ms] ago, action [indices:monitor/recovery[n]], node [{<datanode-ip>-hotData1}{TjAYjkLwSz6O64RgnuOTtQ}{jMDOUcAzQxadkHYcq4re8w}{<datanode-ip>}{<datanode-ip>:9301}{cdfhilrstw}{ml.machine_memory=404122529792, ml.max_open_jobs=512, box_type=hot, xpack.installed=true, ml.max_jvm_size=32212254720, transform.node=true}], id [861895820]
[2022-09-15T03:53:48,360][ERROR][o.e.x.m.c.i.IndexStatsCollector] [master-node] collector [index-stats] timed out when collecting data: node [TjAYjkLwSz6O64RgnuOTtQ] did not respond within [10s]
[2022-09-15T03:53:55,240][WARN ][o.e.t.TransportService   ] [master-node] Received response for a request that has timed out, sent [16.8s/16813ms] ago, timed out [6.8s/6805ms] ago, action [indices:monitor/stats[n]], node [{<datanode-ip>-hotData1}{TjAYjkLwSz6O64RgnuOTtQ}{jMDOUcAzQxadkHYcq4re8w}{<datanode-ip>}{<datanode-ip>:9301}{cdfhilrstw}{ml.machine_memory=404122529792, ml.max_open_jobs=512, box_type=hot, xpack.installed=true, ml.max_jvm_size=32212254720, transform.node=true}], id [861896736]
[2022-09-15T03:53:58,413][ERROR][o.e.x.m.c.c.ClusterStatsCollector] [master-node] collector [cluster_stats] timed out when collecting data: node [TjAYjkLwSz6O64RgnuOTtQ] did not respond within [10s]
[2022-09-15T03:54:04,998][WARN ][o.e.t.TransportService   ] [master-node] Received response for a request that has timed out, sent [16.6s/16613ms] ago, timed out [6.6s/6606ms] ago, action [cluster:monitor/stats[n]], node [{<datanode-ip>-hotData1}{TjAYjkLwSz6O64RgnuOTtQ}{jMDOUcAzQxadkHYcq4re8w}{<datanode-ip>}{<datanode-ip>:9301}{cdfhilrstw}{ml.machine_memory=404122529792, ml.max_open_jobs=512, box_type=hot, xpack.installed=true, ml.max_jvm_size=32212254720, transform.node=true}], id [861897799]
[2022-09-15T03:54:09,892][WARN ][o.e.c.InternalClusterInfoService] [master-node] failed to retrieve stats for node [TjAYjkLwSz6O64RgnuOTtQ]: [<datanode-ip>-hotData1][<datanode-ip>:9301][cluster:monitor/nodes/stats[n]] request_id [861898356] timed out after [15012ms]
[2022-09-15T03:54:09,911][WARN ][o.e.c.InternalClusterInfoService] [master-node] failed to retrieve shard stats from node [TjAYjkLwSz6O64RgnuOTtQ]: [<datanode-ip>-hotData1][<datanode-ip>:9301][indices:monitor/stats[n]] request_id [861898423] timed out after [15012ms]
[2022-09-15T03:54:11,126][WARN ][o.e.t.TransportService   ] [master-node] Received response for a request that has timed out, sent [16.2s/16214ms] ago, timed out [1.2s/1202ms] ago, action [cluster:monitor/nodes/stats[n]], node [{<datanode-ip>-hotData1}{TjAYjkLwSz6O64RgnuOTtQ}{jMDOUcAzQxadkHYcq4re8w}{<datanode-ip>}{<datanode-ip>:9301}{cdfhilrstw}{ml.machine_memory=404122529792, ml.max_open_jobs=512, box_type=hot, xpack.installed=true, ml.max_jvm_size=32212254720, transform.node=true}], id [861898356]
[2022-09-15T03:54:11,171][WARN ][o.e.t.TransportService   ] [master-node] Received response for a request that has timed out, sent [16.4s/16414ms] ago, timed out [1.4s/1402ms] ago, action [indices:monitor/stats[n]], node [{<datanode-ip>-hotData1}{TjAYjkLwSz6O64RgnuOTtQ}{jMDOUcAzQxadkHYcq4re8w}{<datanode-ip>}{<datanode-ip>:9301}{cdfhilrstw}{ml.machine_memory=404122529792, ml.max_open_jobs=512, box_type=hot, xpack.installed=true, ml.max_jvm_size=32212254720, transform.node=true}], id [861898423]
[2022-09-15T03:54:38,348][ERROR][o.e.x.m.c.i.IndexRecoveryCollector] [master-node] collector [index_recovery] timed out when collecting data: node [TjAYjkLwSz6O64RgnuOTtQ] did not respond within [10s]
[2022-09-15T03:54:45,412][WARN ][o.e.t.TransportService   ] [master-node] Received response for a request that has timed out, sent [17.2s/17213ms] ago, timed out [7.2s/7206ms] ago, action [indices:monitor/recovery[n]], node [{<datanode-ip>-hotData1}{TjAYjkLwSz6O64RgnuOTtQ}{jMDOUcAzQxadkHYcq4re8w}{<datanode-ip>}{<datanode-ip>:9301}{cdfhilrstw}{ml.machine_memory=404122529792, ml.max_open_jobs=512, box_type=hot, xpack.installed=true, ml.max_jvm_size=32212254720, transform.node=true}], id [861901548]
[2022-09-15T03:54:48,356][ERROR][o.e.x.m.c.i.IndexStatsCollector] [master-node] collector [index-stats] timed out when collecting data: node [TjAYjkLwSz6O64RgnuOTtQ] did not respond within [10s]

……

[2022-09-15T05:03:13,644][WARN ][o.e.c.InternalClusterInfoService] [master-node] failed to retrieve stats for node [TjAYjkLwSz6O64RgnuOTtQ]: [<datanode-ip>-hotData1][<datanode-ip>:9301][cluster:monitor/nodes/stats[n]] request_id [862291126] timed out after [15011ms]
[2022-09-15T05:03:13,654][WARN ][o.e.c.InternalClusterInfoService] [master-node] failed to retrieve shard stats from node [TjAYjkLwSz6O64RgnuOTtQ]: [<datanode-ip>-hotData1][<datanode-ip>:9301][indices:monitor/stats[n]] request_id [862291151] timed out after [15011ms]
[2022-09-15T05:03:27,880][WARN ][o.e.t.TransportService   ] [master-node] Received response for a request that has timed out, sent [9.8m/589656ms] ago, timed out [9.6m/579647ms] ago, action [indices:monitor/stats[n]], node [{<datanode-ip>-hotData1}{TjAYjkLwSz6O64RgnuOTtQ}{jMDOUcAzQxadkHYcq4re8w}{<datanode-ip>}{<datanode-ip>:9301}{cdfhilrstw}{ml.machine_memory=404122529792, ml.max_open_jobs=512, box_type=hot, xpack.installed=true, ml.max_jvm_size=32212254720, transform.node=true}], id [862237911]
[2022-09-15T05:03:38,383][ERROR][o.e.x.m.c.i.IndexRecoveryCollector] [master-node] collector [index_recovery] timed out when collecting data: node [TjAYjkLwSz6O64RgnuOTtQ] did not respond within [10s]
[2022-09-15T05:03:40,321][WARN ][o.e.t.TransportService   ] [master-node] Received response for a request that has timed out, sent [9.8m/591857ms] ago, timed out [9.6m/581849ms] ago, action [cluster:monitor/stats[n]], node [{<datanode-ip>-hotData1}{TjAYjkLwSz6O64RgnuOTtQ}{jMDOUcAzQxadkHYcq4re8w}{<datanode-ip>}{<datanode-ip>:9301}{cdfhilrstw}{ml.machine_memory=404122529792, ml.max_open_jobs=512, box_type=hot, xpack.installed=true, ml.max_jvm_size=32212254720, transform.node=true}], id [862238978]
[2022-09-15T05:03:48,394][ERROR][o.e.x.m.c.i.IndexStatsCollector] [master-node] collector [index-stats] timed out when collecting data: node [TjAYjkLwSz6O64RgnuOTtQ] did not respond within [10s]
[2022-09-15T05:03:52,902][WARN ][o.e.t.TransportService   ] [master-node] Received response for a request that has timed out, sent [9.9m/594857ms] ago, timed out [9.6m/579844ms] ago, action [cluster:monitor/nodes/stats[n]], node [{<datanode-ip>-hotData1}{TjAYjkLwSz6O64RgnuOTtQ}{jMDOUcAzQxadkHYcq4re8w}{<datanode-ip>}{<datanode-ip>:9301}{cdfhilrstw}{ml.machine_memory=404122529792, ml.max_open_jobs=512, box_type=hot, xpack.installed=true, ml.max_jvm_size=32212254720, transform.node=true}], id [862239931]
[2022-09-15T05:03:52,939][WARN ][o.e.t.TransportService   ] [master-node] Received response for a request that has timed out, sent [9.9m/594857ms] ago, timed out [9.6m/579844ms] ago, action [indices:monitor/stats[n]], node [{<datanode-ip>-hotData1}{TjAYjkLwSz6O64RgnuOTtQ}{jMDOUcAzQxadkHYcq4re8w}{<datanode-ip>}{<datanode-ip>:9301}{cdfhilrstw}{ml.machine_memory=404122529792, ml.max_open_jobs=512, box_type=hot, xpack.installed=true, ml.max_jvm_size=32212254720, transform.node=true}], id [862239956]
[2022-09-15T05:03:58,442][ERROR][o.e.x.m.c.c.ClusterStatsCollector] [master-node] collector [cluster_stats] timed out when collecting data: node [TjAYjkLwSz6O64RgnuOTtQ] did not respond within [10s]
[2022-09-15T05:03:58,688][WARN ][o.e.c.InternalClusterInfoService] [master-node] failed to retrieve stats for node [TjAYjkLwSz6O64RgnuOTtQ]: [<datanode-ip>-hotData1][<datanode-ip>:9301][cluster:monitor/nodes/stats[n]] request_id [862295206] timed out after [15010ms]
[2022-09-15T05:03:58,700][WARN ][o.e.c.InternalClusterInfoService] [master-node] failed to retrieve shard stats from node [TjAYjkLwSz6O64RgnuOTtQ]: [<datanode-ip>-hotData1][<datanode-ip>:9301][indices:monitor/stats[n]] request_id [862295249] timed out after [15010ms]
[2022-09-15T05:04:33,674][WARN ][o.e.t.TransportService   ] [master-node] Received response for a request that has timed out, sent [10m/605261ms] ago, timed out [9.9m/595253ms] ago, action [indices:monitor/recovery[n]], node [{<datanode-ip>-hotData1}{TjAYjkLwSz6O64RgnuOTtQ}{jMDOUcAzQxadkHYcq4re8w}{<datanode-ip>}{<datanode-ip>:9301}{cdfhilrstw}{ml.machine_memory=404122529792, ml.max_open_jobs=512, box_type=hot, xpack.installed=true, ml.max_jvm_size=32212254720, transform.node=true}], id [862242643]
[2022-09-15T05:04:38,380][ERROR][o.e.x.m.c.i.IndexRecoveryCollector] [master-node] collector [index_recovery] timed out when collecting data: node [TjAYjkLwSz6O64RgnuOTtQ] did not respond within [10s]
[2022-09-15T05:04:43,733][WARN ][o.e.c.InternalClusterInfoService] [master-node] failed to retrieve stats for node [TjAYjkLwSz6O64RgnuOTtQ]: [<datanode-ip>-hotData1][<datanode-ip>:9301][cluster:monitor/nodes/stats[n]] request_id [862299514] timed out after [15012ms]
[2022-09-15T05:04:43,742][WARN ][o.e.c.InternalClusterInfoService] [master-node] failed to retrieve shard stats from node [TjAYjkLwSz6O64RgnuOTtQ]: [<datanode-ip>-hotData1][<datanode-ip>:9301][indices:monitor/stats[n]] request_id [862299551] timed out after [15012ms]
[2022-09-15T05:04:48,390][ERROR][o.e.x.m.c.i.IndexStatsCollector] [master-node] collector [index-stats] timed out when collecting data: node [TjAYjkLwSz6O64RgnuOTtQ] did not respond within [10s]
[2022-09-15T05:04:48,656][WARN ][o.e.t.TransportService   ] [master-node] Received response for a request that has timed out, sent [10.1m/610264ms] ago, timed out [10m/600256ms] ago, action [indices:monitor/stats[n]], node [{<datanode-ip>-hotData1}{TjAYjkLwSz6O64RgnuOTtQ}{jMDOUcAzQxadkHYcq4re8w}{<datanode-ip>}{<datanode-ip>:9301}{cdfhilrstw}{ml.machine_memory=404122529792, ml.max_open_jobs=512, box_type=hot, xpack.installed=true, ml.max_jvm_size=32212254720, transform.node=true}], id [862243559]
[2022-09-15T05:04:53,516][WARN ][o.e.t.TransportService   ] [master-node] Received response for a request that has timed out, sent [10.1m/610463ms] ago, timed out [9.9m/595452ms] ago, action [cluster:monitor/nodes/stats[n]], node [{<datanode-ip>-hotData1}{TjAYjkLwSz6O64RgnuOTtQ}{jMDOUcAzQxadkHYcq4re8w}{<datanode-ip>}{<datanode-ip>:9301}{cdfhilrstw}{ml.machine_memory=404122529792, ml.max_open_jobs=512, box_type=hot, xpack.installed=true, ml.max_jvm_size=32212254720, transform.node=true}], id [862244016]
[2022-09-15T05:04:53,529][WARN ][o.e.t.TransportService   ] [master-node] Received response for a request that has timed out, sent [10.1m/610463ms] ago, timed out [9.9m/595452ms] ago, action [indices:monitor/stats[n]], node [{<datanode-ip>-hotData1}{TjAYjkLwSz6O64RgnuOTtQ}{jMDOUcAzQxadkHYcq4re8w}{<datanode-ip>}{<datanode-ip>:9301}{cdfhilrstw}{ml.machine_memory=404122529792, ml.max_open_jobs=512, box_type=hot, xpack.installed=true, ml.max_jvm_size=32212254720, transform.node=true}], id [862244054]
[2022-09-15T05:04:58,445][ERROR][o.e.x.m.c.c.ClusterStatsCollector] [master-node] collector [cluster_stats] timed out when collecting data: node [TjAYjkLwSz6O64RgnuOTtQ] did not respond within [10s]
[2022-09-15T05:05:01,965][WARN ][o.e.t.TransportService   ] [master-node] Received response for a request that has timed out, sent [10.2m/613466ms] ago, timed out [10m/603458ms] ago, action [cluster:monitor/stats[n]], node [{<datanode-ip>-hotData1}{TjAYjkLwSz6O64RgnuOTtQ}{jMDOUcAzQxadkHYcq4re8w}{<datanode-ip>}{<datanode-ip>:9301}{cdfhilrstw}{ml.machine_memory=404122529792, ml.max_open_jobs=512, box_type=hot, xpack.installed=true, ml.max_jvm_size=32212254720, transform.node=true}], id [862244737]
[2022-09-15T05:05:28,771][WARN ][o.e.c.InternalClusterInfoService] [master-node] failed to retrieve stats for node [TjAYjkLwSz6O64RgnuOTtQ]: [<datanode-ip>-hotData1][<datanode-ip>:9301][cluster:monitor/nodes/stats[n]] request_id [862303821] timed out after [15011ms]
[2022-09-15T05:05:28,779][WARN ][o.e.c.InternalClusterInfoService] [master-node] failed to retrieve shard stats from node [TjAYjkLwSz6O64RgnuOTtQ]: [<datanode-ip>-hotData1][<datanode-ip>:9301][indices:monitor/stats[n]] request_id [862303851] timed out after [15011ms]
[2022-09-15T05:05:38,382][ERROR][o.e.x.m.c.i.IndexRecoveryCollector] [master-node] collector [index_recovery] timed out when collecting data: node [TjAYjkLwSz6O64RgnuOTtQ] did not respond within [10s]
[2022-09-15T05:05:48,394][ERROR][o.e.x.m.c.i.IndexStatsCollector] [master-node] collector [index-stats] timed out when collecting data: node [TjAYjkLwSz6O64RgnuOTtQ] did not respond within [10s]
[2022-09-15T05:05:54,910][WARN ][o.e.t.TransportService   ] [master-node] Received response for a request that has timed out, sent [10.4m/626672ms] ago, timed out [10.1m/611661ms] ago, action [cluster:monitor/nodes/stats[n]], node [{<datanode-ip>-hotData1}{TjAYjkLwSz6O64RgnuOTtQ}{jMDOUcAzQxadkHYcq4re8w}{<datanode-ip>}{<datanode-ip>:9301}{cdfhilrstw}{ml.machine_memory=404122529792, ml.max_open_jobs=512, box_type=hot, xpack.installed=true, ml.max_jvm_size=32212254720, transform.node=true}], id [862248240]
[2022-09-15T05:05:54,949][WARN ][o.e.t.TransportService   ] [master-node] Received response for a request that has timed out, sent [10.4m/626672ms] ago, timed out [10.1m/611661ms] ago, action [indices:monitor/stats[n]], node [{<datanode-ip>-hotData1}{TjAYjkLwSz6O64RgnuOTtQ}{jMDOUcAzQxadkHYcq4re8w}{<datanode-ip>}{<datanode-ip>:9301}{cdfhilrstw}{ml.machine_memory=404122529792, ml.max_open_jobs=512, box_type=hot, xpack.installed=true, ml.max_jvm_size=32212254720, transform.node=true}], id [862248301]
[2022-09-15T05:05:55,218][WARN ][o.e.t.TransportService   ] [master-node] Received response for a request that has timed out, sent [10.4m/626872ms] ago, timed out [10.2m/616866ms] ago, action [indices:monitor/recovery[n]], node [{<datanode-ip>-hotData1}{TjAYjkLwSz6O64RgnuOTtQ}{jMDOUcAzQxadkHYcq4re8w}{<datanode-ip>}{<datanode-ip>:9301}{cdfhilrstw}{ml.machine_memory=404122529792, ml.max_open_jobs=512, box_type=hot, xpack.installed=true, ml.max_jvm_size=32212254720, transform.node=true}], id [862248401]
[2022-09-15T05:05:58,623][ERROR][o.e.x.m.c.c.ClusterStatsCollector] [master-node] collector [cluster_stats] timed out when collecting data: node [TjAYjkLwSz6O64RgnuOTtQ] did not respond within [10s]

……

[2022-09-15T05:31:06,373][WARN ][o.e.t.TransportService   ] [master-node] Transport response handler not found of id [862350586]
[2022-09-15T05:31:07,488][WARN ][o.e.t.TransportService   ] [master-node] Transport response handler not found of id [862350705]
[2022-09-15T05:31:07,525][WARN ][o.e.t.TransportService   ] [master-node] Received response for a request that has timed out, sent [17.6m/1058309ms] ago, timed out [17.3m/1043298ms] ago, action [indices:monitor/stats[n]], node [{<datanode-ip>-hotData1}{TjAYjkLwSz6O64RgnuOTtQ}{jMDOUcAzQxadkHYcq4re8w}{<datanode-ip>}{<datanode-ip>:9301}{cdfhilrstw}{ml.machine_memory=404122529792, ml.max_open_jobs=512, box_type=hot, xpack.installed=true, ml.max_jvm_size=32212254720, transform.node=true}], id [862350743]
[2022-09-15T05:31:22,312][WARN ][o.e.t.TransportService   ] [master-node] Received response for a request that has timed out, sent [17.7m/1063912ms] ago, timed out [17.5m/1053905ms] ago, action [indices:monitor/stats[n]], node [{<datanode-ip>-hotData1}{TjAYjkLwSz6O64RgnuOTtQ}{jMDOUcAzQxadkHYcq4re8w}{<datanode-ip>}{<datanode-ip>:9301}{cdfhilrstw}{ml.machine_memory=404122529792, ml.max_open_jobs=512, box_type=hot, xpack.installed=true, ml.max_jvm_size=32212254720, transform.node=true}], id [862351614]
[2022-09-15T05:31:36,417][WARN ][o.e.t.TransportService   ] [master-node] Received response for a request that has timed out, sent [17.7m/1067715ms] ago, timed out [17.6m/1057907ms] ago, action [cluster:monitor/stats[n]], node [{<datanode-ip>-hotData1}{TjAYjkLwSz6O64RgnuOTtQ}{jMDOUcAzQxadkHYcq4re8w}{<datanode-ip>}{<datanode-ip>:9301}{cdfhilrstw}{ml.machine_memory=404122529792, ml.max_open_jobs=512, box_type=hot, xpack.installed=true, ml.max_jvm_size=32212254720, transform.node=true}], id [862352685]
[2022-09-15T05:31:38,392][ERROR][o.e.x.m.c.i.IndexRecoveryCollector] [master-node] collector [index_recovery] timed out when collecting data: node [TjAYjkLwSz6O64RgnuOTtQ] did not respond within [10s]
[2022-09-15T05:31:45,205][WARN ][o.e.c.InternalClusterInfoService] [master-node] failed to retrieve stats for node [TjAYjkLwSz6O64RgnuOTtQ]: [<datanode-ip>-hotData1][<datanode-ip>:9301][cluster:monitor/nodes/stats[n]] request_id [862453089] timed out after [15010ms]
[2022-09-15T05:31:45,211][WARN ][o.e.c.InternalClusterInfoService] [master-node] failed to retrieve shard stats from node [TjAYjkLwSz6O64RgnuOTtQ]: [<datanode-ip>-hotData1][<datanode-ip>:9301][indices:monitor/stats[n]] request_id [862453109] timed out after [15010ms]
[2022-09-15T05:31:48,402][ERROR][o.e.x.m.c.i.IndexStatsCollector] [master-node] collector [index-stats] timed out when collecting data: node [TjAYjkLwSz6O64RgnuOTtQ] did not respond within [10s]
[2022-09-15T05:31:58,459][ERROR][o.e.x.m.c.c.ClusterStatsCollector] [master-node] collector [cluster_stats] timed out when collecting data: node [TjAYjkLwSz6O64RgnuOTtQ] did not respond within [10s]
[2022-09-15T05:32:12,941][WARN ][o.e.t.TransportService   ] [master-node] Transport response handler not found of id [862354981]
[2022-09-15T05:32:12,972][WARN ][o.e.t.TransportService   ] [master-node] Transport response handler not found of id [862355043]
[2022-09-15T05:32:30,234][WARN ][o.e.c.InternalClusterInfoService] [master-node] failed to retrieve stats for node [TjAYjkLwSz6O64RgnuOTtQ]: [<datanode-ip>-hotData1][<datanode-ip>:9301][cluster:monitor/nodes/stats[n]] request_id [862457363] timed out after [15011ms]
[2022-09-15T05:32:30,251][WARN ][o.e.c.InternalClusterInfoService] [master-node] failed to retrieve shard stats from node [TjAYjkLwSz6O64RgnuOTtQ]: [<datanode-ip>-hotData1][<datanode-ip>:9301][indices:monitor/stats[n]] request_id [862457426] timed out after [15011ms]
[2022-09-15T05:32:33,168][WARN ][o.e.t.TransportService   ] [master-node] Transport response handler not found of id [862356342]
[2022-09-15T05:32:38,397][ERROR][o.e.x.m.c.i.IndexRecoveryCollector] [master-node] collector [index_recovery] timed out when collecting data: node [TjAYjkLwSz6O64RgnuOTtQ] did not respond within [10s]
[2022-09-15T05:32:48,408][ERROR][o.e.x.m.c.i.IndexStatsCollector] [master-node] collector [index-stats] timed out when collecting data: node [TjAYjkLwSz6O64RgnuOTtQ] did not respond within [10s]
[2022-09-15T05:32:49,718][WARN ][o.e.t.TransportService   ] [master-node] Transport response handler not found of id [862357258]
[2022-09-15T05:32:58,460][ERROR][o.e.x.m.c.c.ClusterStatsCollector] [master-node] collector [cluster_stats] timed out when collecting data: node [TjAYjkLwSz6O64RgnuOTtQ] did not respond within [10s]
[2022-09-15T05:33:03,197][WARN ][o.e.t.TransportService   ] [master-node] Transport response handler not found of id [862358325]
[2022-09-15T05:33:15,284][WARN ][o.e.c.InternalClusterInfoService] [master-node] failed to retrieve stats for node [TjAYjkLwSz6O64RgnuOTtQ]: [<datanode-ip>-hotData1][<datanode-ip>:9301][cluster:monitor/nodes/stats[n]] request_id [862461759] timed out after [15010ms]
[2022-09-15T05:33:15,292][WARN ][o.e.c.InternalClusterInfoService] [master-node] failed to retrieve shard stats from node [TjAYjkLwSz6O64RgnuOTtQ]: [<datanode-ip>-hotData1][<datanode-ip>:9301][indices:monitor/stats[n]] request_id [862461785] timed out after [15010ms]
[2022-09-15T05:33:19,183][WARN ][o.e.t.TransportService   ] [master-node] Transport response handler not found of id [862359370]
[2022-09-15T05:33:19,225][WARN ][o.e.t.TransportService   ] [master-node] Transport response handler not found of id [862359403]

……


Log from the unresponsive data node:

[2022-09-15T03:51:24,340][ERROR][o.e.x.m.c.n.NodeStatsCollector] [data-node] collector [node_stats] timed out when collecting data: node [TjAYjkLwSz6O64RgnuOTtQ] did not respond within [10s]
[2022-09-15T03:51:25,196][WARN ][o.e.t.TransportService   ] [data-node] Received response for a request that has timed out, sent [10.9s/10929ms] ago, timed out [800ms/800ms] ago, action [cluster:monitor/nodes/stats[n]], node [{data-node}{TjAYjkLwSz6O64RgnuOTtQ}{jMDOUcAzQxadkHYcq4re8w}{data-node-ip}{data-node-ip:9301}{cdfhilrstw}{ml.machine_memory=404122529792, box_type=hot, xpack.installed=true, transform.node=true, ml.max_open_jobs=512, ml.max_jvm_size=32212254720}], id [2338691848]
[2022-09-15T03:51:54,378][ERROR][o.e.x.m.c.n.NodeStatsCollector] [data-node] collector [node_stats] timed out when collecting data: node [TjAYjkLwSz6O64RgnuOTtQ] did not respond within [10s]
[2022-09-15T03:51:55,635][WARN ][o.e.t.TransportService   ] [data-node] Received response for a request that has timed out, sent [11.2s/11272ms] ago, timed out [1.2s/1201ms] ago, action [cluster:monitor/nodes/stats[n]], node [{data-node}{TjAYjkLwSz6O64RgnuOTtQ}{jMDOUcAzQxadkHYcq4re8w}{data-node-ip}{data-node-ip:9301}{cdfhilrstw}{ml.machine_memory=404122529792, box_type=hot, xpack.installed=true, transform.node=true, ml.max_open_jobs=512, ml.max_jvm_size=32212254720}], id [2338724537]
[2022-09-15T03:52:24,378][ERROR][o.e.x.m.c.n.NodeStatsCollector] [data-node] collector [node_stats] timed out when collecting data: node [TjAYjkLwSz6O64RgnuOTtQ] did not respond within [10s]
[2022-09-15T03:52:26,795][WARN ][o.e.t.TransportService   ] [data-node] Received response for a request that has timed out, sent [12.4s/12405ms] ago, timed out [2.4s/2401ms] ago, action [cluster:monitor/nodes/stats[n]], node [{data-node}{TjAYjkLwSz6O64RgnuOTtQ}{jMDOUcAzQxadkHYcq4re8w}{data-node-ip}{data-node-ip:9301}{cdfhilrstw}{ml.machine_memory=404122529792, box_type=hot, xpack.installed=true, transform.node=true, ml.max_open_jobs=512, ml.max_jvm_size=32212254720}], id [2338756636]
[2022-09-15T03:52:54,451][ERROR][o.e.x.m.c.n.NodeStatsCollector] [data-node] collector [node_stats] timed out when collecting data: node [TjAYjkLwSz6O64RgnuOTtQ] did not respond within [10s]
[2022-09-15T03:52:57,302][WARN ][o.e.t.TransportService   ] [data-node] Received response for a request that has timed out, sent [12.8s/12806ms] ago, timed out [2.8s/2801ms] ago, action [cluster:monitor/nodes/stats[n]], node [{data-node}{TjAYjkLwSz6O64RgnuOTtQ}{jMDOUcAzQxadkHYcq4re8w}{data-node-ip}{data-node-ip:9301}{cdfhilrstw}{ml.machine_memory=404122529792, box_type=hot, xpack.installed=true, transform.node=true, ml.max_open_jobs=512, ml.max_jvm_size=32212254720}], id [2338789640]
[2022-09-15T03:53:24,451][ERROR][o.e.x.m.c.n.NodeStatsCollector] [data-node] collector [node_stats] timed out when collecting data: node [TjAYjkLwSz6O64RgnuOTtQ] did not respond within [10s]
[2022-09-15T03:53:30,288][WARN ][o.e.t.TransportService   ] [data-node] Received response for a request that has timed out, sent [16s/16006ms] ago, timed out [6s/6002ms] ago, action [cluster:monitor/nodes/stats[n]], node [{data-node}{TjAYjkLwSz6O64RgnuOTtQ}{jMDOUcAzQxadkHYcq4re8w}{data-node-ip}{data-node-ip:9301}{cdfhilrstw}{ml.machine_memory=404122529792, box_type=hot, xpack.installed=true, transform.node=true, ml.max_open_jobs=512, ml.max_jvm_size=32212254720}], id [2338822000]
[2022-09-15T03:53:54,469][ERROR][o.e.x.m.c.n.NodeStatsCollector] [data-node] collector [node_stats] timed out when collecting data: node [TjAYjkLwSz6O64RgnuOTtQ] did not respond within [10s]
[2022-09-15T03:54:00,729][WARN ][o.e.t.TransportService   ] [data-node] Received response for a request that has timed out, sent [16.3s/16337ms] ago, timed out [6.2s/6204ms] ago, action [cluster:monitor/nodes/stats[n]], node [{data-node}{TjAYjkLwSz6O64RgnuOTtQ}{jMDOUcAzQxadkHYcq4re8w}{data-node-ip}{data-node-ip:9301}{cdfhilrstw}{ml.machine_memory=404122529792, box_type=hot, xpack.installed=true, transform.node=true, ml.max_open_jobs=512, ml.max_jvm_size=32212254720}], id [2338854078]
[2022-09-15T03:54:24,471][ERROR][o.e.x.m.c.n.NodeStatsCollector] [data-node] collector [node_stats] timed out when collecting data: node [TjAYjkLwSz6O64RgnuOTtQ] did not respond within [10s]
[2022-09-15T03:54:32,360][WARN ][o.e.t.TransportService   ] [data-node] Received response for a request that has timed out, sent [17.8s/17838ms] ago, timed out [7.8s/7803ms] ago, action [cluster:monitor/nodes/stats[n]], node [{data-node}{TjAYjkLwSz6O64RgnuOTtQ}{jMDOUcAzQxadkHYcq4re8w}{data-node-ip}{data-node-ip:9301}{cdfhilrstw}{ml.machine_memory=404122529792, box_type=hot, xpack.installed=true, transform.node=true, ml.max_open_jobs=512, ml.max_jvm_size=32212254720}], id [2338887260]
[2022-09-15T03:54:54,510][ERROR][o.e.x.m.c.n.NodeStatsCollector] [data-node] collector [node_stats] timed out when collecting data: node [TjAYjkLwSz6O64RgnuOTtQ] did not respond within [10s]

……

[2022-09-15T05:03:26,891][ERROR][o.e.x.m.c.n.NodeStatsCollector] [data-node] collector [node_stats] timed out when collecting data: node [TjAYjkLwSz6O64RgnuOTtQ] did not respond within [10s]
[2022-09-15T05:03:37,730][WARN ][o.e.t.TransportService   ] [data-node] Received response for a request that has timed out, sent [9.8m/591288ms] ago, timed out [9.6m/581275ms] ago, action [cluster:monitor/nodes/stats[n]], node [{data-node}{TjAYjkLwSz6O64RgnuOTtQ}{jMDOUcAzQxadkHYcq4re8w}{data-node-ip}{data-node-ip:9301}{cdfhilrstw}{ml.machine_memory=404122529792, box_type=hot, xpack.installed=true, transform.node=true, ml.max_open_jobs=512, ml.max_jvm_size=32212254720}], id [2342609471]
[2022-09-15T05:03:56,925][ERROR][o.e.x.m.c.n.NodeStatsCollector] [data-node] collector [node_stats] timed out when collecting data: node [TjAYjkLwSz6O64RgnuOTtQ] did not respond within [10s]
[2022-09-15T05:04:18,475][WARN ][o.e.t.TransportService   ] [data-node] Received response for a request that has timed out, sent [10m/601951ms] ago, timed out [9.8m/591946ms] ago, action [cluster:monitor/nodes/stats[n]], node [{data-node}{TjAYjkLwSz6O64RgnuOTtQ}{jMDOUcAzQxadkHYcq4re8w}{data-node-ip}{data-node-ip:9301}{cdfhilrstw}{ml.machine_memory=404122529792, box_type=hot, xpack.installed=true, transform.node=true, ml.max_open_jobs=512, ml.max_jvm_size=32212254720}], id [2342639732]
[2022-09-15T05:04:26,925][ERROR][o.e.x.m.c.n.NodeStatsCollector] [data-node] collector [node_stats] timed out when collecting data: node [TjAYjkLwSz6O64RgnuOTtQ] did not respond within [10s]
[2022-09-15T05:04:56,969][ERROR][o.e.x.m.c.n.NodeStatsCollector] [data-node] collector [node_stats] timed out when collecting data: node [TjAYjkLwSz6O64RgnuOTtQ] did not respond within [10s]
[2022-09-15T05:04:59,375][WARN ][o.e.t.TransportService   ] [data-node] Received response for a request that has timed out, sent [10.2m/612866ms] ago, timed out [10m/602855ms] ago, action [cluster:monitor/nodes/stats[n]], node [{data-node}{TjAYjkLwSz6O64RgnuOTtQ}{jMDOUcAzQxadkHYcq4re8w}{data-node-ip}{data-node-ip:9301}{cdfhilrstw}{ml.machine_memory=404122529792, box_type=hot, xpack.installed=true, transform.node=true, ml.max_open_jobs=512, ml.max_jvm_size=32212254720}], id [2342670306]
[2022-09-15T05:05:26,970][ERROR][o.e.x.m.c.n.NodeStatsCollector] [data-node] collector [node_stats] timed out when collecting data: node [TjAYjkLwSz6O64RgnuOTtQ] did not respond within [10s]
[2022-09-15T05:05:38,801][WARN ][o.e.t.TransportService   ] [data-node] Received response for a request that has timed out, sent [10.3m/622252ms] ago, timed out [10.2m/612242ms] ago, action [cluster:monitor/nodes/stats[n]], node [{data-node}{TjAYjkLwSz6O64RgnuOTtQ}{jMDOUcAzQxadkHYcq4re8w}{data-node-ip}{data-node-ip:9301}{cdfhilrstw}{ml.machine_memory=404122529792, box_type=hot, xpack.installed=true, transform.node=true, ml.max_open_jobs=512, ml.max_jvm_size=32212254720}], id [2342700107]
[2022-09-15T05:05:56,982][ERROR][o.e.x.m.c.n.NodeStatsCollector] [data-node] collector [node_stats] timed out when collecting data: node [TjAYjkLwSz6O64RgnuOTtQ] did not respond within [10s]

……

[2022-09-15T05:30:57,797][ERROR][o.e.x.m.c.n.NodeStatsCollector] [data-node] collector [node_stats] timed out when collecting data: node [TjAYjkLwSz6O64RgnuOTtQ] did not respond within [10s]
[2022-09-15T05:31:27,798][ERROR][o.e.x.m.c.n.NodeStatsCollector] [data-node] collector [node_stats] timed out when collecting data: node [TjAYjkLwSz6O64RgnuOTtQ] did not respond within [10s]
[2022-09-15T05:31:34,742][WARN ][o.e.t.TransportService   ] [data-node] Received response for a request that has timed out, sent [17.7m/1067415ms] ago, timed out [17.6m/1057387ms] ago, action [cluster:monitor/nodes/stats[n]], node [{data-node}{TjAYjkLwSz6O64RgnuOTtQ}{jMDOUcAzQxadkHYcq4re8w}{data-node-ip}{data-node-ip:9301}{cdfhilrstw}{ml.machine_memory=404122529792, box_type=hot, xpack.installed=true, transform.node=true, ml.max_open_jobs=512, ml.max_jvm_size=32212254720}], id [2343796732]
[2022-09-15T05:31:57,815][ERROR][o.e.x.m.c.n.NodeStatsCollector] [data-node] collector [node_stats] timed out when collecting data: node [TjAYjkLwSz6O64RgnuOTtQ] did not respond within [10s]
[2022-09-15T05:32:19,064][WARN ][o.e.t.TransportService   ] [data-node] Received response for a request that has timed out, sent [18m/1081932ms] ago, timed out [17.8m/1071923ms] ago, action [cluster:monitor/nodes/stats[n]], node [{data-node}{TjAYjkLwSz6O64RgnuOTtQ}{jMDOUcAzQxadkHYcq4re8w}{data-node-ip}{data-node-ip:9301}{cdfhilrstw}{ml.machine_memory=404122529792, box_type=hot, xpack.installed=true, transform.node=true, ml.max_open_jobs=512, ml.max_jvm_size=32212254720}], id [2343826258]
[2022-09-15T05:32:27,815][ERROR][o.e.x.m.c.n.NodeStatsCollector] [data-node] collector [node_stats] timed out when collecting data: node [TjAYjkLwSz6O64RgnuOTtQ] did not respond within [10s]
[2022-09-15T05:32:57,861][ERROR][o.e.x.m.c.n.NodeStatsCollector] [data-node] collector [node_stats] timed out when collecting data: node [TjAYjkLwSz6O64RgnuOTtQ] did not respond within [10s]
[2022-09-15T05:33:01,819][WARN ][o.e.t.TransportService   ] [data-node] Received response for a request that has timed out, sent [18.2m/1094625ms] ago, timed out [18m/1084496ms] ago, action [cluster:monitor/nodes/stats[n]], node [{data-node}{TjAYjkLwSz6O64RgnuOTtQ}{jMDOUcAzQxadkHYcq4re8w}{data-node-ip}{data-node-ip:9301}{cdfhilrstw}{ml.machine_memory=404122529792, box_type=hot, xpack.installed=true, transform.node=true, ml.max_open_jobs=512, ml.max_jvm_size=32212254720}], id [2343855789]
[2022-09-15T05:33:27,861][ERROR][o.e.x.m.c.n.NodeStatsCollector] [data-node] collector [node_stats] timed out when collecting data: node [TjAYjkLwSz6O64RgnuOTtQ] did not respond within [10s]
[2022-09-15T05:33:46,800][WARN ][o.e.t.TransportService   ] [data-node] Received response for a request that has timed out, sent [18.4m/1109699ms] ago, timed out [18.3m/1099691ms] ago, action [cluster:monitor/nodes/stats[n]], node [{data-node}{TjAYjkLwSz6O64RgnuOTtQ}{jMDOUcAzQxadkHYcq4re8w}{data-node-ip}{data-node-ip:9301}{cdfhilrstw}{ml.machine_memory=404122529792, box_type=hot, xpack.installed=true, transform.node=true, ml.max_open_jobs=512, ml.max_jvm_size=32212254720}], id [2343885047]
[2022-09-15T05:33:57,884][ERROR][o.e.x.m.c.n.NodeStatsCollector] [data-node] collector [node_stats] timed out when collecting data: node [TjAYjkLwSz6O64RgnuOTtQ] did not respond within [10s]
[2022-09-15T05:34:27,885][ERROR][o.e.x.m.c.n.NodeStatsCollector] [data-node] collector [node_stats] timed out when collecting data: node [TjAYjkLwSz6O64RgnuOTtQ] did not respond within [10s]
[2022-09-15T05:34:33,175][WARN ][o.e.t.TransportService   ] [data-node] Received response for a request that has timed out, sent [18.7m/1125860ms] ago, timed out [18.5m/1115987ms] ago, action [cluster:monitor/nodes/stats[n]], node [{data-node}{TjAYjkLwSz6O64RgnuOTtQ}{jMDOUcAzQxadkHYcq4re8w}{data-node-ip}{data-node-ip:9301}{cdfhilrstw}{ml.machine_memory=404122529792, box_type=hot, xpack.installed=true, transform.node=true, ml.max_open_jobs=512, ml.max_jvm_size=32212254720}], id [2343914699]
[2022-09-15T05:34:57,977][ERROR][o.e.x.m.c.n.NodeStatsCollector] [data-node] collector [node_stats] timed out when collecting data: node [TjAYjkLwSz6O64RgnuOTtQ] did not respond within [10s]

Could anyone please tell me what I should do to avoid or fix this?

What is the full output of the cluster stats API?
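For reference, something along these lines should return it (host, port and credentials are placeholders, adjust to your setup):

curl -s -u <user>:<password> 'http://<any-node>:9200/_cluster/stats?pretty'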

Do you mean cluster stats at the time of the event, or under normal conditions?
If you mean at the time of the event, I can hardly get it, because the cluster stops responding to cluster management requests. Only if I notice the problem in time and call the API immediately might it respond, and even then only after about 15~20 minutes, so I don't have that output right now. I'll try to capture it the next time this occurs.
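Next time it happens I'll try to grab it with timeouts so the call cannot hang for 20 minutes; roughly something like this (host and credentials are placeholders for our setup, and I'm assuming the timeout query parameter makes the API return partial results instead of waiting on the stuck node):

curl -s -u <user>:<password> --max-time 120 'http://<master-node-ip>:9200/_cluster/stats?timeout=30s&pretty' -o cluster_stats_during_incident.json

--max-time is just a client-side safety net so curl itself gives up after two minutes.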

Here is the output I just collected:

{
    "_nodes": {
      "total": 87,
      "successful": 87,
      "failed": 0
    },
    "cluster_name": "elasticsearch",
    "cluster_uuid": "pzOBDLk6QD-5Nh3IAkEI9g",
    "timestamp": 1663309925484,
    "status": "green",
    "indices": {
      "count": 123,
      "shards": {
        "total": 10187,
        "primaries": 3404,
        "replication": 1.9926556991774382,
        "index": {
          "shards": {
            "min": 2,
            "max": 168,
            "avg": 82.82113821138212
          },
          "primaries": {
            "min": 1,
            "max": 56,
            "avg": 27.67479674796748
          },
          "replication": {
            "min": 1,
            "max": 2,
            "avg": 1.7967479674796747
          }
        }
      },
      "docs": {
        "count": 555750877735,
        "deleted": 6775437
      },
      "store": {
        "size": "276tb",
        "size_in_bytes": 303491316410363,
        "total_data_set_size": "276tb",
        "total_data_set_size_in_bytes": 303491316410363,
        "reserved": "0b",
        "reserved_in_bytes": 0
      },
      "fielddata": {
        "memory_size": "29.4gb",
        "memory_size_in_bytes": 31643923536,
        "evictions": 0
      },
      "query_cache": {
        "memory_size": "11.3gb",
        "memory_size_in_bytes": 12191513389,
        "total_count": 21830989241,
        "hit_count": 1550809550,
        "miss_count": 20280179691,
        "cache_size": 5966297,
        "cache_count": 35197935,
        "evictions": 29231638
      },
      "completion": {
        "size": "0b",
        "size_in_bytes": 0
      },
      "segments": {
        "count": 234255,
        "memory": "44.2gb",
        "memory_in_bytes": 47536094348,
        "terms_memory": "41.4gb",
        "terms_memory_in_bytes": 44461779440,
        "stored_fields_memory": "940mb",
        "stored_fields_memory_in_bytes": 985717464,
        "term_vectors_memory": "0b",
        "term_vectors_memory_in_bytes": 0,
        "norms_memory": "9.1kb",
        "norms_memory_in_bytes": 9344,
        "points_memory": "0b",
        "points_memory_in_bytes": 0,
        "doc_values_memory": "1.9gb",
        "doc_values_memory_in_bytes": 2088588100,
        "index_writer_memory": "7.5gb",
        "index_writer_memory_in_bytes": 8062276182,
        "version_map_memory": "47.5mb",
        "version_map_memory_in_bytes": 49874621,
        "fixed_bit_set": "3.8mb",
        "fixed_bit_set_memory_in_bytes": 3997056,
        "max_unsafe_auto_id_timestamp": 1663286408875,
        "file_sizes": {}
      },
      "mappings": {
        "field_types": [
          {
            "name": "boolean",
            "count": 46,
            "index_count": 18,
            "script_count": 0
          },
          {
            "name": "constant_keyword",
            "count": 6,
            "index_count": 2,
            "script_count": 0
          },
          {
            "name": "date",
            "count": 96,
            "index_count": 23,
            "script_count": 0
          },
          {
            "name": "float",
            "count": 317,
            "index_count": 92,
            "script_count": 0
          },
          {
            "name": "half_float",
            "count": 56,
            "index_count": 14,
            "script_count": 0
          },
          {
            "name": "integer",
            "count": 154,
            "index_count": 7,
            "script_count": 0
          },
          {
            "name": "ip",
            "count": 2,
            "index_count": 2,
            "script_count": 0
          },
          {
            "name": "keyword",
            "count": 93446,
            "index_count": 114,
            "script_count": 0
          },
          {
            "name": "long",
            "count": 1530,
            "index_count": 112,
            "script_count": 0
          },
          {
            "name": "nested",
            "count": 24,
            "index_count": 10,
            "script_count": 0
          },
          {
            "name": "object",
            "count": 4342,
            "index_count": 102,
            "script_count": 0
          },
          {
            "name": "text",
            "count": 47,
            "index_count": 16,
            "script_count": 0
          },
          {
            "name": "version",
            "count": 3,
            "index_count": 3,
            "script_count": 0
          }
        ],
        "runtime_field_types": []
      },
      "analysis": {
        "char_filter_types": [],
        "tokenizer_types": [],
        "filter_types": [],
        "analyzer_types": [],
        "built_in_char_filters": [],
        "built_in_tokenizers": [],
        "built_in_filters": [],
        "built_in_analyzers": []
      },
      "versions": [
        {
          "version": "7.3.1",
          "index_count": 15,
          "primary_shard_count": 386,
          "total_primary_size": "11.1gb",
          "total_primary_bytes": 11966991852
        },
        {
          "version": "7.17.1",
          "index_count": 108,
          "primary_shard_count": 3018,
          "total_primary_size": "91.7tb",
          "total_primary_bytes": 100914549194604
        }
      ]
    },
    "nodes": {
      "count": {
        "total": 87,
        "coordinating_only": 0,
        "data": 56,
        "data_cold": 56,
        "data_content": 56,
        "data_frozen": 56,
        "data_hot": 56,
        "data_warm": 56,
        "ingest": 87,
        "master": 3,
        "ml": 87,
        "remote_cluster_client": 87,
        "transform": 56,
        "voting_only": 0
      },
      "versions": [
        "7.17.1"
      ],
      "os": {
        "available_processors": 7536,
        "allocated_processors": 7536,
        "names": [
          {
            "name": "Linux",
            "count": 87
          }
        ],
        "pretty_names": [
          {
            "pretty_name": "CentOS Linux 7 (Core)",
            "count": 87
          }
        ],
        "architectures": [
          {
            "arch": "amd64",
            "count": 87
          }
        ],
        "mem": {
          "total": "31.8tb",
          "total_in_bytes": 35025297195008,
          "free": "1.4tb",
          "free_in_bytes": 1563216474112,
          "used": "30.4tb",
          "used_in_bytes": 33462080720896,
          "free_percent": 4,
          "used_percent": 96
        }
      },
      "process": {
        "cpu": {
          "percent": 165
        },
        "open_file_descriptors": {
          "min": 2280,
          "max": 5249,
          "avg": 4116
        }
      },
      "jvm": {
        "max_uptime": "91.8d",
        "max_uptime_in_millis": 7932660809,
        "versions": [
          {
            "version": "17.0.2",
            "vm_name": "OpenJDK 64-Bit Server VM",
            "vm_version": "17.0.2+8",
            "vm_vendor": "Eclipse Adoptium",
            "bundled_jdk": true,
            "using_bundled_jdk": true,
            "count": 87
          }
        ],
        "mem": {
          "heap_used": "1.3tb",
          "heap_used_in_bytes": 1527297226392,
          "heap_max": "2.5tb",
          "heap_max_in_bytes": 2802466160640
        },
        "threads": 30352
      },
      "fs": {
        "total": "401.2tb",
        "total_in_bytes": 441204275675136,
        "free": "124.6tb",
        "free_in_bytes": 137011178471424,
        "available": "104.3tb",
        "available_in_bytes": 114774525612032
      },
      "plugins": [],
      "network_types": {
        "transport_types": {
          "security4": 87
        },
        "http_types": {
          "security4": 87
        }
      },
      "discovery_types": {
        "zen": 87
      },
      "packaging_types": [
        {
          "flavor": "default",
          "type": "tar",
          "count": 87
        }
      ],
      "ingest": {
        "number_of_pipelines": 2,
        "processor_stats": {
          "gsub": {
            "count": 0,
            "failed": 0,
            "current": 0,
            "time": "0s",
            "time_in_millis": 0
          },
          "script": {
            "count": 0,
            "failed": 0,
            "current": 0,
            "time": "0s",
            "time_in_millis": 0
          }
        }
      }
    }
  }

Thanks!

Hot threads from the [management] thread pool on the unresponsive data node:

100.1% [cpu=100.1%, other=0.0%] (500.6ms out of 500ms) cpu usage by thread 'elasticsearch[<data_node_name>][management][T#1]'
  10/10 snapshots sharing following 62 elements
	java.base@17.0.2/java.lang.ThreadLocal$ThreadLocalMap.expungeStaleEntry(ThreadLocal.java:632)
	java.base@17.0.2/java.lang.ThreadLocal$ThreadLocalMap.remove(ThreadLocal.java:516)
	java.base@17.0.2/java.lang.ThreadLocal.remove(ThreadLocal.java:242)
	java.base@17.0.2/java.util.concurrent.locks.ReentrantReadWriteLock$Sync.tryReleaseShared(ReentrantReadWriteLock.java:430)
	java.base@17.0.2/java.util.concurrent.locks.AbstractQueuedSynchronizer.releaseShared(AbstractQueuedSynchronizer.java:1094)
	java.base@17.0.2/java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.unlock(ReentrantReadWriteLock.java:897)
	app//org.elasticsearch.common.util.concurrent.ReleasableLock.close(ReleasableLock.java:38)
	app//org.elasticsearch.index.translog.Translog.getLastSyncedCheckpoint(Translog.java:648)
	app//org.elasticsearch.index.translog.Translog.getLastSyncedGlobalCheckpoint(Translog.java:642)
	app//org.elasticsearch.index.engine.InternalEngine.getLastSyncedGlobalCheckpoint(InternalEngine.java:2831)
	app//org.elasticsearch.index.shard.IndexShard.getLastSyncedGlobalCheckpoint(IndexShard.java:2795)
	app//org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryShardReference.globalCheckpoint(TransportReplicationAction.java:1173)
	app//org.elasticsearch.action.support.replication.ReplicationOperation$1$$Lambda$6675/0x0000000801ab9688.getAsLong(Unknown	Source)
	app//org.elasticsearch.action.support.replication.ReplicationOperation.updateCheckPoints(ReplicationOperation.java:307)
	app//org.elasticsearch.action.support.replication.ReplicationOperation.access$200(ReplicationOperation.java:46)
	app//org.elasticsearch.action.support.replication.ReplicationOperation$1.onResponse(ReplicationOperation.java:158)
	app//org.elasticsearch.action.support.replication.ReplicationOperation$1.onResponse(ReplicationOperation.java:152)
	app//org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryResult.runPostReplicationActions(TransportReplicationAction.java:578)
	app//org.elasticsearch.action.support.replication.ReplicationOperation.handlePrimaryResult(ReplicationOperation.java:152)
	app//org.elasticsearch.action.support.replication.ReplicationOperation$$Lambda$6670/0x0000000801ab86c8.accept(Unknown	Source)
	app//org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:136)
	app//org.elasticsearch.action.ActionListener.completeWith(ActionListener.java:447)
	app//org.elasticsearch.index.seqno.GlobalCheckpointSyncAction.shardOperationOnPrimary(GlobalCheckpointSyncAction.java:95)
	app//org.elasticsearch.index.seqno.GlobalCheckpointSyncAction.shardOperationOnPrimary(GlobalCheckpointSyncAction.java:40)
	app//org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryShardReference.perform(TransportReplicationAction.java:1153)
	app//org.elasticsearch.action.support.replication.ReplicationOperation.execute(ReplicationOperation.java:124)
	app//org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.runWithPrimaryShardReference(TransportReplicationAction.java:508)
	app//org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.lambda$doRun$0(TransportReplicationAction.java:414)
	app//org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction$$Lambda$6665/0x0000000801ab39e8.accept(Unknown	Source)
	app//org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:136)
	app//org.elasticsearch.index.shard.IndexShard.lambda$wrapPrimaryOperationPermitListener$23(IndexShard.java:3438)
	app//org.elasticsearch.index.shard.IndexShard$$Lambda$6607/0x0000000801aa4470.accept(Unknown	Source)
	app//org.elasticsearch.action.ActionListener$DelegatingFailureActionListener.onResponse(ActionListener.java:219)
	app//org.elasticsearch.index.shard.IndexShardOperationPermits.acquire(IndexShardOperationPermits.java:253)
	app//org.elasticsearch.index.shard.IndexShardOperationPermits.acquire(IndexShardOperationPermits.java:199)
	app//org.elasticsearch.index.shard.IndexShard.acquirePrimaryOperationPermit(IndexShard.java:3409)
	app//org.elasticsearch.action.support.replication.TransportReplicationAction.acquirePrimaryOperationPermit(TransportReplicationAction.java:1090)
	app//org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.doRun(TransportReplicationAction.java:411)
	app//org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)
	app//org.elasticsearch.action.support.replication.TransportReplicationAction.handlePrimaryRequest(TransportReplicationAction.java:355)
	app//org.elasticsearch.action.support.replication.TransportReplicationAction$$Lambda$4548/0x0000000801726228.messageReceived(Unknown	Source)
	org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$1.doRun(SecurityServerTransportInterceptor.java:341)
	app//org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)
	org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$3.onResponse(SecurityServerTransportInterceptor.java:404)
	org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$3.onResponse(SecurityServerTransportInterceptor.java:394)
	org.elasticsearch.xpack.security.authz.AuthorizationService.authorizeSystemUser(AuthorizationService.java:620)
	org.elasticsearch.xpack.security.authz.AuthorizationService.authorize(AuthorizationService.java:250)
	org.elasticsearch.xpack.security.transport.ServerTransportFilter$NodeProfile.lambda$inbound$1(ServerTransportFilter.java:136)
	org.elasticsearch.xpack.security.transport.ServerTransportFilter$NodeProfile$$Lambda$5687/0x0000000801802bd0.accept(Unknown	Source)
	app//org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:136)
	app//org.elasticsearch.action.ActionListener$MappedActionListener.onResponse(ActionListener.java:101)
	org.elasticsearch.xpack.security.authc.AuthenticatorChain.authenticateAsync(AuthenticatorChain.java:102)
	org.elasticsearch.xpack.security.authc.AuthenticationService.authenticate(AuthenticationService.java:199)
	org.elasticsearch.xpack.security.transport.ServerTransportFilter$NodeProfile.inbound(ServerTransportFilter.java:128)
	org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler.messageReceived(SecurityServerTransportInterceptor.java:415)
	app//org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:67)
	app//org.elasticsearch.transport.TransportService$6.doRun(TransportService.java:1045)
	app//org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:777)
	app//org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)
	java.base@17.0.2/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
	java.base@17.0.2/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
	java.base@17.0.2/java.lang.Thread.run(Thread.java:833)
	
	
100.1% [cpu=100.1%, other=0.0%] (500.3ms out of 500ms) cpu usage by thread 'elasticsearch[<data_node_name>][management][T#4]'
  10/10 snapshots sharing following 62 elements
	java.base@17.0.2/java.lang.ThreadLocal$ThreadLocalMap.expungeStaleEntry(ThreadLocal.java:632)
	java.base@17.0.2/java.lang.ThreadLocal$ThreadLocalMap.remove(ThreadLocal.java:516)
	java.base@17.0.2/java.lang.ThreadLocal.remove(ThreadLocal.java:242)
	java.base@17.0.2/java.util.concurrent.locks.ReentrantReadWriteLock$Sync.tryReleaseShared(ReentrantReadWriteLock.java:430)
	java.base@17.0.2/java.util.concurrent.locks.AbstractQueuedSynchronizer.releaseShared(AbstractQueuedSynchronizer.java:1094)
	java.base@17.0.2/java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.unlock(ReentrantReadWriteLock.java:897)
	app//org.elasticsearch.common.util.concurrent.ReleasableLock.close(ReleasableLock.java:38)
	app//org.elasticsearch.index.translog.Translog.getLastSyncedCheckpoint(Translog.java:648)
	app//org.elasticsearch.index.translog.Translog.getLastSyncedGlobalCheckpoint(Translog.java:642)
	app//org.elasticsearch.index.engine.InternalEngine.getLastSyncedGlobalCheckpoint(InternalEngine.java:2831)
	app//org.elasticsearch.index.shard.IndexShard.getLastSyncedGlobalCheckpoint(IndexShard.java:2795)
	app//org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryShardReference.globalCheckpoint(TransportReplicationAction.java:1173)
	app//org.elasticsearch.action.support.replication.ReplicationOperation$1$$Lambda$6675/0x0000000801ab9688.getAsLong(Unknown	Source)
	app//org.elasticsearch.action.support.replication.ReplicationOperation.updateCheckPoints(ReplicationOperation.java:307)
	app//org.elasticsearch.action.support.replication.ReplicationOperation.access$200(ReplicationOperation.java:46)
	app//org.elasticsearch.action.support.replication.ReplicationOperation$1.onResponse(ReplicationOperation.java:158)
	app//org.elasticsearch.action.support.replication.ReplicationOperation$1.onResponse(ReplicationOperation.java:152)
	app//org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryResult.runPostReplicationActions(TransportReplicationAction.java:578)
	app//org.elasticsearch.action.support.replication.ReplicationOperation.handlePrimaryResult(ReplicationOperation.java:152)
	app//org.elasticsearch.action.support.replication.ReplicationOperation$$Lambda$6670/0x0000000801ab86c8.accept(Unknown	Source)
	app//org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:136)
	app//org.elasticsearch.action.ActionListener.completeWith(ActionListener.java:447)
	app//org.elasticsearch.index.seqno.GlobalCheckpointSyncAction.shardOperationOnPrimary(GlobalCheckpointSyncAction.java:95)
	app//org.elasticsearch.index.seqno.GlobalCheckpointSyncAction.shardOperationOnPrimary(GlobalCheckpointSyncAction.java:40)
	app//org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryShardReference.perform(TransportReplicationAction.java:1153)
	app//org.elasticsearch.action.support.replication.ReplicationOperation.execute(ReplicationOperation.java:124)
	app//org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.runWithPrimaryShardReference(TransportReplicationAction.java:508)
	app//org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.lambda$doRun$0(TransportReplicationAction.java:414)
	app//org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction$$Lambda$6665/0x0000000801ab39e8.accept(Unknown	Source)
	app//org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:136)
	app//org.elasticsearch.index.shard.IndexShard.lambda$wrapPrimaryOperationPermitListener$23(IndexShard.java:3438)
	app//org.elasticsearch.index.shard.IndexShard$$Lambda$6607/0x0000000801aa4470.accept(Unknown	Source)
	app//org.elasticsearch.action.ActionListener$DelegatingFailureActionListener.onResponse(ActionListener.java:219)
	app//org.elasticsearch.index.shard.IndexShardOperationPermits.acquire(IndexShardOperationPermits.java:253)
	app//org.elasticsearch.index.shard.IndexShardOperationPermits.acquire(IndexShardOperationPermits.java:199)
	app//org.elasticsearch.index.shard.IndexShard.acquirePrimaryOperationPermit(IndexShard.java:3409)
	app//org.elasticsearch.action.support.replication.TransportReplicationAction.acquirePrimaryOperationPermit(TransportReplicationAction.java:1090)
	app//org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.doRun(TransportReplicationAction.java:411)
	app//org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)
	app//org.elasticsearch.action.support.replication.TransportReplicationAction.handlePrimaryRequest(TransportReplicationAction.java:355)
	app//org.elasticsearch.action.support.replication.TransportReplicationAction$$Lambda$4548/0x0000000801726228.messageReceived(Unknown	Source)
	org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$1.doRun(SecurityServerTransportInterceptor.java:341)
	app//org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)
	org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$3.onResponse(SecurityServerTransportInterceptor.java:404)
	org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$3.onResponse(SecurityServerTransportInterceptor.java:394)
	org.elasticsearch.xpack.security.authz.AuthorizationService.authorizeSystemUser(AuthorizationService.java:620)
	org.elasticsearch.xpack.security.authz.AuthorizationService.authorize(AuthorizationService.java:250)
	org.elasticsearch.xpack.security.transport.ServerTransportFilter$NodeProfile.lambda$inbound$1(ServerTransportFilter.java:136)
	org.elasticsearch.xpack.security.transport.ServerTransportFilter$NodeProfile$$Lambda$5687/0x0000000801802bd0.accept(Unknown	Source)
	app//org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:136)
	app//org.elasticsearch.action.ActionListener$MappedActionListener.onResponse(ActionListener.java:101)
	org.elasticsearch.xpack.security.authc.AuthenticatorChain.authenticateAsync(AuthenticatorChain.java:102)
	org.elasticsearch.xpack.security.authc.AuthenticationService.authenticate(AuthenticationService.java:199)
	org.elasticsearch.xpack.security.transport.ServerTransportFilter$NodeProfile.inbound(ServerTransportFilter.java:128)
	org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler.messageReceived(SecurityServerTransportInterceptor.java:415)
	app//org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:67)
	app//org.elasticsearch.transport.TransportService$6.doRun(TransportService.java:1045)
	app//org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:777)
	app//org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)
	java.base@17.0.2/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
	java.base@17.0.2/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
	java.base@17.0.2/java.lang.Thread.run(Thread.java:833)
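For anyone who wants to pull the same dump, this kind of output comes from the nodes hot threads API aimed at the stuck node, along the lines of (host and credentials are placeholders):

curl -s -u <user>:<password> 'http://<master-node-ip>:9200/_nodes/TjAYjkLwSz6O64RgnuOTtQ/hot_threads?threads=9999'

threads=9999 simply asks for all busy threads instead of the default top three.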

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.