Elasticsearch 7.4 query_then_fetch slow log

What is the average shard size for the index you are querying?
The index I queried has 12 primary shards with 1 replica each. The primary shards hold about 500 GB in total (roughly 42 GB per primary shard), so the total data including replicas is about 1000 GB. The index contains about 1.3 billion documents.
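For reference, a quick way to verify the per-shard store size is the `_cat/shards` API (a generic sketch, not output from my cluster; the index name is just an example):

```
GET _cat/shards/access-log-2023.07.31?v&h=index,shard,prirep,store&s=shard
```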

How many indices/shards are you querying at a time?
For example, if I query the last two days in Kibana's Discover, the search will hit the indices access-log-2023.07.31 and access-log-2023.08.01.

How much data does each node hold?
The entire cluster has more than 900 indices and about 4,500 shards in total, evenly distributed across 80 nodes, so each node holds about 55 shards.
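To see how much data each node actually holds, the `_cat/allocation` API reports per-node shard counts and disk usage (a generic sketch, no assumptions beyond a stock cluster):

```
GET _cat/allocation?v&h=node,shards,disk.indices,disk.used,disk.percent
```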

What type of storage are you using? Local SSDs?
Yes, high-performance SSD disks are used.

The data distribution strategy is managed by Elasticsearch itself, and shards are distributed evenly. The cluster stores various kinds of log data in indices of different sizes. I enabled the search slow log with a threshold of 30s. Some shards return in 30s or 40s, but for certain shards the time recorded in the slow log always matches the {"timeout":"300000ms"} value set in the _search request body: whatever timeout I set is exactly what the slow log records. With {"timeout":"300000ms"}, the slow shard's log shows took[5m], took_millis[300577]; with {"timeout":"60000ms"}, it shows took[1m]. It feels like these shards never actually respond and just run until the timeout.
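For context, this is roughly how the 30s slow log threshold and the request timeout described above are set (a sketch using the standard index-settings and _search APIs; the index names and the match_all query are placeholders, not my actual query):

```
PUT access-log-2023.07.31/_settings
{
  "index.search.slowlog.threshold.query.warn": "30s"
}

POST access-log-*/_search
{
  "timeout": "300000ms",
  "query": { "match_all": {} }
}
```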
When querying with the timeout set to 300000ms, the Kibana index monitoring metrics "request rate", "request time (ms)", and "latency" show values for the first 2 minutes, then nothing for 2 minutes, then values again for the following 2 minutes, leaving 2 minutes of idle time in the middle. I don't know what this phenomenon means.