Do very large segments lead to a significant increase in slow logs?

I have an Elasticsearch index that continuously receives new data, along with a certain proportion of update and delete operations. As a result, the index's docs.deleted count grows steadily, and after a few weeks it reaches the deletes_pct_allowed threshold, which triggers a cleanup merge. Over that period, node CPU usage and average query latency climb steadily, then drop sharply once the merge completes.
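
For reference, this is roughly how I watch the deleted-doc ratio and check the effective merge-policy threshold (the index name my-index and the localhost:9200 endpoint are placeholders for my actual setup):

```
# Deleted-doc counts and size for the index
curl -s "localhost:9200/_cat/indices/my-index?v&h=index,docs.count,docs.deleted,store.size"

# Effective merge-policy settings, including deletes_pct_allowed if it is not set explicitly
curl -s "localhost:9200/my-index/_settings?include_defaults=true&filter_path=*.defaults.index.merge.policy"
```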

To shorten this cycle, I added a crontab job that runs _forcemerge?only_expunge_deletes=true daily to clean up deleted documents. This introduces another problem, though: segments that are already close to the max_merged_segment limit of 5GB are not eligible for merging, so only_expunge_deletes never rewrites them.
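
The cron entry looks roughly like this (the schedule, host, and index name my-index are placeholders):

```
# Runs daily; only_expunge_deletes rewrites segments with a high enough deleted-doc
# ratio, but skips segments that are already near max_merged_segment (5GB)
0 3 * * * curl -s -XPOST "localhost:9200/my-index/_forcemerge?only_expunge_deletes=true" >/dev/null 2>&1
```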

To purge the deleted documents inside these large segments, I ran a force merge with _forcemerge?max_num_segments=1 two days ago. This produced a very large segment (15GB), and since then many slow-log entries exceeding 1 second have appeared, whereas previously there were almost none.
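
For completeness, this is the one-off call I ran and how the resulting segments can be inspected afterwards (again, my-index and the endpoint are placeholders):

```
# One-off force merge down to a single segment per shard; this produced the ~15GB segment
curl -s -XPOST "localhost:9200/my-index/_forcemerge?max_num_segments=1"

# Inspect segment sizes and per-segment deleted-doc counts
curl -s "localhost:9200/_cat/segments/my-index?v&h=shard,segment,size,docs.count,docs.deleted"
```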

Could this increase in slow logs be caused by the segment being too large? Given this situation, how should I address it? It seems that segments can only keep growing, with no way to make them smaller again.